Test Report: Docker_Linux_crio_arm64 21790

0500345ed58569c501f3381e2b1a5a0e0bac6bd7:2025-10-27:42095

Failed tests (36/327)

Order | Failed test | Duration (s)
29 TestAddons/serial/Volcano 0.58
35 TestAddons/parallel/Registry 15.09
36 TestAddons/parallel/RegistryCreds 0.53
37 TestAddons/parallel/Ingress 145.34
38 TestAddons/parallel/InspektorGadget 5.34
39 TestAddons/parallel/MetricsServer 5.37
41 TestAddons/parallel/CSI 42.39
42 TestAddons/parallel/Headlamp 3.78
43 TestAddons/parallel/CloudSpanner 5.34
44 TestAddons/parallel/LocalPath 9.77
45 TestAddons/parallel/NvidiaDevicePlugin 5.27
46 TestAddons/parallel/Yakd 6.28
97 TestFunctional/parallel/ServiceCmdConnect 603.6
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.82
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.56
135 TestFunctional/parallel/ServiceCmd/Format 0.55
136 TestFunctional/parallel/ServiceCmd/URL 0.49
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.82
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.42
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.38
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.24
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.49
191 TestJSONOutput/pause/Command 2.53
197 TestJSONOutput/unpause/Command 1.65
282 TestPause/serial/Pause 8.4
341 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.34
350 TestStartStop/group/old-k8s-version/serial/Pause 6.88
354 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3
361 TestStartStop/group/no-preload/serial/Pause 6.73
365 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.19
373 TestStartStop/group/embed-certs/serial/Pause 7.14
374 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.97
381 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.44
389 TestStartStop/group/newest-cni/serial/Pause 5.79
392 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.3
TestAddons/serial/Volcano (0.58s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-789752 addons disable volcano --alsologtostderr -v=1: exit status 11 (581.530891ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 22:19:17.144073 1141480 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:19:17.145474 1141480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:19:17.145504 1141480 out.go:374] Setting ErrFile to fd 2...
	I1027 22:19:17.145510 1141480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:19:17.145804 1141480 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:19:17.146096 1141480 mustload.go:66] Loading cluster: addons-789752
	I1027 22:19:17.146521 1141480 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:19:17.146541 1141480 addons.go:606] checking whether the cluster is paused
	I1027 22:19:17.146667 1141480 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:19:17.146682 1141480 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:19:17.147144 1141480 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:19:17.165644 1141480 ssh_runner.go:195] Run: systemctl --version
	I1027 22:19:17.165702 1141480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:19:17.182868 1141480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:19:17.293167 1141480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:19:17.293257 1141480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:19:17.325073 1141480 cri.go:89] found id: "75710d7cc526305b5d44527c3948f7660d0f11c9bb988fea4cc50adb7f70c4b0"
	I1027 22:19:17.325104 1141480 cri.go:89] found id: "ba4375e556d33ee6fe2adbb573ec62057326c21efd49a2ca6746e0e867dca0eb"
	I1027 22:19:17.325109 1141480 cri.go:89] found id: "6360be647f550637a0e7e58311ce8090659f094e7d1fdaace5aa6c9b9e1084ff"
	I1027 22:19:17.325112 1141480 cri.go:89] found id: "718db41ae0e017a0def85acbf7b9a58c43c4917bcde880c3ec1dad468aaa3ad0"
	I1027 22:19:17.325116 1141480 cri.go:89] found id: "fa9874677b5b67f09e92a81d9823e4f1e082a4821a07ab9244b51921cf04483a"
	I1027 22:19:17.325141 1141480 cri.go:89] found id: "e49247d0ffa77a129b4b9b98634538344f523f40499e976caa9a86569158b66d"
	I1027 22:19:17.325151 1141480 cri.go:89] found id: "2a94fd6377a9793dba093bc0477e41ee94cbc624b3f6a43bb885426fc9ced620"
	I1027 22:19:17.325155 1141480 cri.go:89] found id: "1891841b92bc24962a3bc53daf5b28f39360ac3c20a06fa7adc815691b905a55"
	I1027 22:19:17.325158 1141480 cri.go:89] found id: "364352eda05362968f44f25fc3f6a928413dbff5414c84001966e91d713fc3c5"
	I1027 22:19:17.325166 1141480 cri.go:89] found id: "2b141a747edd885ca1f2cb0de68d722d1172c781ee2f1dc422c402f2426b71a5"
	I1027 22:19:17.325170 1141480 cri.go:89] found id: "2e03207b4b26edc5c7672a96ced8ce7c0a8bba6d5d8054568dafe65d952af2fe"
	I1027 22:19:17.325173 1141480 cri.go:89] found id: "c89583e34b204413fbc3cae91a3c194e064a4a74af39d957e557f74a7b9c5dfc"
	I1027 22:19:17.325177 1141480 cri.go:89] found id: "9265cc16ebe00d91c78da71020aea5e78947eb97fca3558b1ee78ec3e8c7ab51"
	I1027 22:19:17.325180 1141480 cri.go:89] found id: "9872fee8e1cf948bd5e39ef7072c2312923b19b6158d32881c3f53e2068a2eba"
	I1027 22:19:17.325191 1141480 cri.go:89] found id: "3c9c0fd6e60966dd77759dd3fca479416d247d034fcaf1409c303183ab3e1ccb"
	I1027 22:19:17.325216 1141480 cri.go:89] found id: "f712dddd4573d0fe9d735c1c15c28d0975b63f01ad7343c996c9ac9e22da6813"
	I1027 22:19:17.325235 1141480 cri.go:89] found id: "a7d75dad24853dbae39098cf151dae187d4239afff3b61a9449981f10b79fd2a"
	I1027 22:19:17.325240 1141480 cri.go:89] found id: "bcef984a34b582632964a62e2ea13989b587a3a34ab7f141ca2d126c15affbb6"
	I1027 22:19:17.325244 1141480 cri.go:89] found id: "a6c04b76522e43566ec49632184d8253b7f3efdd2d549705d0bb56dcd3504b32"
	I1027 22:19:17.325247 1141480 cri.go:89] found id: "f412d82dffe403b62ba84bcc01017d9c6d04b401071fcf54955edab34af34160"
	I1027 22:19:17.325252 1141480 cri.go:89] found id: "ed5258f512747f7de544b7f8b20e30fb6309e5f6031e68aa1d93016b71da54db"
	I1027 22:19:17.325256 1141480 cri.go:89] found id: "b57e96f12e54c8af6eed4bafb19e50128bf903f3ab267cb2c3f7399260b3c948"
	I1027 22:19:17.325259 1141480 cri.go:89] found id: "732fddf2b32debfeea89e5896d571b702244927ab3040765eda956c6120fd6ad"
	I1027 22:19:17.325262 1141480 cri.go:89] found id: ""
	I1027 22:19:17.325326 1141480 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:19:17.340653 1141480 out.go:203] 
	W1027 22:19:17.343580 1141480 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:19:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:19:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:19:17.343608 1141480 out.go:285] * 
	* 
	W1027 22:19:17.639330 1141480 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:19:17.642481 1141480 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-789752 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.58s)
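
Every addon-disable failure in this run shares the signature above: before touching the addon, minikube checks whether the cluster is paused by listing kube-system containers through crictl and then running `sudo runc list -f json`, and on this crio runner /run/runc does not exist, so the pause check itself fails and the command exits with MK_ADDON_DISABLE_PAUSED (exit status 11). Below is a minimal Go sketch of that style of pause probe; `listPaused` is a hypothetical helper for illustration, not minikube's actual implementation (the real logic lives around cri.go and addons.go per the log above).

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // runcContainer holds the fields we need from `runc list -f json`.
    type runcContainer struct {
    	ID     string `json:"id"`
    	Status string `json:"status"`
    }

    // listPaused shells out to runc and returns the IDs of paused containers.
    // When the runc state dir (/run/runc) is missing, runc exits non-zero and
    // this returns an error -- the failure mode seen throughout this report.
    func listPaused() ([]string, error) {
    	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
    	if err != nil {
    		return nil, fmt.Errorf("runc list: %w", err)
    	}
    	var cs []runcContainer
    	if err := json.Unmarshal(out, &cs); err != nil {
    		return nil, err
    	}
    	var paused []string
    	for _, c := range cs {
    		if c.Status == "paused" {
    			paused = append(paused, c.ID)
    		}
    	}
    	return paused, nil
    }

    func main() {
    	ids, err := listPaused()
    	if err != nil {
    		fmt.Println("pause check failed:", err) // e.g. "open /run/runc: no such file or directory"
    		return
    	}
    	fmt.Println("paused containers:", ids)
    }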

TestAddons/parallel/Registry (15.09s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.326241ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-vw4fc" [827638e6-9844-4d0b-a405-c1752b7deb36] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004284814s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-pxgxr" [f3af9e0b-d8bc-47fc-b5a9-4e6b9d23fc0c] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003713153s
addons_test.go:392: (dbg) Run:  kubectl --context addons-789752 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-789752 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-789752 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.512617265s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 ip
2025/10/27 22:19:43 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-789752 addons disable registry --alsologtostderr -v=1: exit status 11 (303.301924ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 22:19:43.719860 1142012 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:19:43.720438 1142012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:19:43.720455 1142012 out.go:374] Setting ErrFile to fd 2...
	I1027 22:19:43.720461 1142012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:19:43.720743 1142012 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:19:43.721044 1142012 mustload.go:66] Loading cluster: addons-789752
	I1027 22:19:43.721410 1142012 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:19:43.721427 1142012 addons.go:606] checking whether the cluster is paused
	I1027 22:19:43.721529 1142012 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:19:43.721541 1142012 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:19:43.721999 1142012 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:19:43.742732 1142012 ssh_runner.go:195] Run: systemctl --version
	I1027 22:19:43.742792 1142012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:19:43.763049 1142012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:19:43.881864 1142012 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:19:43.881964 1142012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:19:43.916726 1142012 cri.go:89] found id: "75710d7cc526305b5d44527c3948f7660d0f11c9bb988fea4cc50adb7f70c4b0"
	I1027 22:19:43.916753 1142012 cri.go:89] found id: "ba4375e556d33ee6fe2adbb573ec62057326c21efd49a2ca6746e0e867dca0eb"
	I1027 22:19:43.916758 1142012 cri.go:89] found id: "6360be647f550637a0e7e58311ce8090659f094e7d1fdaace5aa6c9b9e1084ff"
	I1027 22:19:43.916766 1142012 cri.go:89] found id: "718db41ae0e017a0def85acbf7b9a58c43c4917bcde880c3ec1dad468aaa3ad0"
	I1027 22:19:43.916769 1142012 cri.go:89] found id: "fa9874677b5b67f09e92a81d9823e4f1e082a4821a07ab9244b51921cf04483a"
	I1027 22:19:43.916775 1142012 cri.go:89] found id: "e49247d0ffa77a129b4b9b98634538344f523f40499e976caa9a86569158b66d"
	I1027 22:19:43.916779 1142012 cri.go:89] found id: "2a94fd6377a9793dba093bc0477e41ee94cbc624b3f6a43bb885426fc9ced620"
	I1027 22:19:43.916782 1142012 cri.go:89] found id: "1891841b92bc24962a3bc53daf5b28f39360ac3c20a06fa7adc815691b905a55"
	I1027 22:19:43.916786 1142012 cri.go:89] found id: "364352eda05362968f44f25fc3f6a928413dbff5414c84001966e91d713fc3c5"
	I1027 22:19:43.916796 1142012 cri.go:89] found id: "2b141a747edd885ca1f2cb0de68d722d1172c781ee2f1dc422c402f2426b71a5"
	I1027 22:19:43.916802 1142012 cri.go:89] found id: "2e03207b4b26edc5c7672a96ced8ce7c0a8bba6d5d8054568dafe65d952af2fe"
	I1027 22:19:43.916806 1142012 cri.go:89] found id: "c89583e34b204413fbc3cae91a3c194e064a4a74af39d957e557f74a7b9c5dfc"
	I1027 22:19:43.916809 1142012 cri.go:89] found id: "9265cc16ebe00d91c78da71020aea5e78947eb97fca3558b1ee78ec3e8c7ab51"
	I1027 22:19:43.916812 1142012 cri.go:89] found id: "9872fee8e1cf948bd5e39ef7072c2312923b19b6158d32881c3f53e2068a2eba"
	I1027 22:19:43.916815 1142012 cri.go:89] found id: "3c9c0fd6e60966dd77759dd3fca479416d247d034fcaf1409c303183ab3e1ccb"
	I1027 22:19:43.916832 1142012 cri.go:89] found id: "f712dddd4573d0fe9d735c1c15c28d0975b63f01ad7343c996c9ac9e22da6813"
	I1027 22:19:43.916850 1142012 cri.go:89] found id: "a7d75dad24853dbae39098cf151dae187d4239afff3b61a9449981f10b79fd2a"
	I1027 22:19:43.916868 1142012 cri.go:89] found id: "bcef984a34b582632964a62e2ea13989b587a3a34ab7f141ca2d126c15affbb6"
	I1027 22:19:43.916877 1142012 cri.go:89] found id: "a6c04b76522e43566ec49632184d8253b7f3efdd2d549705d0bb56dcd3504b32"
	I1027 22:19:43.916881 1142012 cri.go:89] found id: "f412d82dffe403b62ba84bcc01017d9c6d04b401071fcf54955edab34af34160"
	I1027 22:19:43.916886 1142012 cri.go:89] found id: "ed5258f512747f7de544b7f8b20e30fb6309e5f6031e68aa1d93016b71da54db"
	I1027 22:19:43.916892 1142012 cri.go:89] found id: "b57e96f12e54c8af6eed4bafb19e50128bf903f3ab267cb2c3f7399260b3c948"
	I1027 22:19:43.916895 1142012 cri.go:89] found id: "732fddf2b32debfeea89e5896d571b702244927ab3040765eda956c6120fd6ad"
	I1027 22:19:43.916901 1142012 cri.go:89] found id: ""
	I1027 22:19:43.916965 1142012 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:19:43.935154 1142012 out.go:203] 
	W1027 22:19:43.938175 1142012 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:19:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:19:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:19:43.938217 1142012 out.go:285] * 
	* 
	W1027 22:19:43.947134 1142012 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:19:43.950169 1142012 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-789752 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.09s)
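
The registry checks themselves passed: both the registry and registry-proxy pods went healthy, the in-cluster `wget --spider` against registry.kube-system.svc.cluster.local succeeded, and the direct GET to the node at http://192.168.49.2:5000 returned. Only the trailing `addons disable registry` failed, with the same MK_ADDON_DISABLE_PAUSED pause-check error as above. For reference, a minimal Go sketch of the kind of reachability probe the test performs (`probe` is a hypothetical helper; the URL is the node endpoint from the log):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probe issues a GET and checks only the status code, roughly what
    // `wget --spider` does (fetch the response, discard the body).
    func probe(url string) error {
    	client := &http.Client{Timeout: 5 * time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("unexpected status %s", resp.Status)
    	}
    	return nil
    }

    func main() {
    	// Node IP and registry port as observed in the log above.
    	if err := probe("http://192.168.49.2:5000"); err != nil {
    		fmt.Println("registry unreachable:", err)
    		return
    	}
    	fmt.Println("registry reachable")
    }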

TestAddons/parallel/RegistryCreds (0.53s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.138177ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-789752
addons_test.go:332: (dbg) Run:  kubectl --context addons-789752 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-789752 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (282.423157ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 22:20:32.874052 1144038 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:20:32.874852 1144038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:20:32.874892 1144038 out.go:374] Setting ErrFile to fd 2...
	I1027 22:20:32.874912 1144038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:20:32.875206 1144038 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:20:32.875522 1144038 mustload.go:66] Loading cluster: addons-789752
	I1027 22:20:32.875950 1144038 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:20:32.875994 1144038 addons.go:606] checking whether the cluster is paused
	I1027 22:20:32.876129 1144038 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:20:32.876168 1144038 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:20:32.876673 1144038 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:20:32.902861 1144038 ssh_runner.go:195] Run: systemctl --version
	I1027 22:20:32.902927 1144038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:20:32.921000 1144038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:20:33.029716 1144038 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:20:33.029807 1144038 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:20:33.068149 1144038 cri.go:89] found id: "75710d7cc526305b5d44527c3948f7660d0f11c9bb988fea4cc50adb7f70c4b0"
	I1027 22:20:33.068170 1144038 cri.go:89] found id: "ba4375e556d33ee6fe2adbb573ec62057326c21efd49a2ca6746e0e867dca0eb"
	I1027 22:20:33.068174 1144038 cri.go:89] found id: "6360be647f550637a0e7e58311ce8090659f094e7d1fdaace5aa6c9b9e1084ff"
	I1027 22:20:33.068178 1144038 cri.go:89] found id: "718db41ae0e017a0def85acbf7b9a58c43c4917bcde880c3ec1dad468aaa3ad0"
	I1027 22:20:33.068181 1144038 cri.go:89] found id: "fa9874677b5b67f09e92a81d9823e4f1e082a4821a07ab9244b51921cf04483a"
	I1027 22:20:33.068185 1144038 cri.go:89] found id: "e49247d0ffa77a129b4b9b98634538344f523f40499e976caa9a86569158b66d"
	I1027 22:20:33.068188 1144038 cri.go:89] found id: "2a94fd6377a9793dba093bc0477e41ee94cbc624b3f6a43bb885426fc9ced620"
	I1027 22:20:33.068193 1144038 cri.go:89] found id: "1891841b92bc24962a3bc53daf5b28f39360ac3c20a06fa7adc815691b905a55"
	I1027 22:20:33.068196 1144038 cri.go:89] found id: "364352eda05362968f44f25fc3f6a928413dbff5414c84001966e91d713fc3c5"
	I1027 22:20:33.068203 1144038 cri.go:89] found id: "2b141a747edd885ca1f2cb0de68d722d1172c781ee2f1dc422c402f2426b71a5"
	I1027 22:20:33.068206 1144038 cri.go:89] found id: "2e03207b4b26edc5c7672a96ced8ce7c0a8bba6d5d8054568dafe65d952af2fe"
	I1027 22:20:33.068210 1144038 cri.go:89] found id: "c89583e34b204413fbc3cae91a3c194e064a4a74af39d957e557f74a7b9c5dfc"
	I1027 22:20:33.068213 1144038 cri.go:89] found id: "9265cc16ebe00d91c78da71020aea5e78947eb97fca3558b1ee78ec3e8c7ab51"
	I1027 22:20:33.068217 1144038 cri.go:89] found id: "9872fee8e1cf948bd5e39ef7072c2312923b19b6158d32881c3f53e2068a2eba"
	I1027 22:20:33.068221 1144038 cri.go:89] found id: "3c9c0fd6e60966dd77759dd3fca479416d247d034fcaf1409c303183ab3e1ccb"
	I1027 22:20:33.068230 1144038 cri.go:89] found id: "f712dddd4573d0fe9d735c1c15c28d0975b63f01ad7343c996c9ac9e22da6813"
	I1027 22:20:33.068234 1144038 cri.go:89] found id: "a7d75dad24853dbae39098cf151dae187d4239afff3b61a9449981f10b79fd2a"
	I1027 22:20:33.068239 1144038 cri.go:89] found id: "bcef984a34b582632964a62e2ea13989b587a3a34ab7f141ca2d126c15affbb6"
	I1027 22:20:33.068242 1144038 cri.go:89] found id: "a6c04b76522e43566ec49632184d8253b7f3efdd2d549705d0bb56dcd3504b32"
	I1027 22:20:33.068246 1144038 cri.go:89] found id: "f412d82dffe403b62ba84bcc01017d9c6d04b401071fcf54955edab34af34160"
	I1027 22:20:33.068251 1144038 cri.go:89] found id: "ed5258f512747f7de544b7f8b20e30fb6309e5f6031e68aa1d93016b71da54db"
	I1027 22:20:33.068259 1144038 cri.go:89] found id: "b57e96f12e54c8af6eed4bafb19e50128bf903f3ab267cb2c3f7399260b3c948"
	I1027 22:20:33.068263 1144038 cri.go:89] found id: "732fddf2b32debfeea89e5896d571b702244927ab3040765eda956c6120fd6ad"
	I1027 22:20:33.068268 1144038 cri.go:89] found id: ""
	I1027 22:20:33.068320 1144038 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:20:33.084057 1144038 out.go:203] 
	W1027 22:20:33.087016 1144038 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:20:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:20:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:20:33.087050 1144038 out.go:285] * 
	* 
	W1027 22:20:33.097823 1144038 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:20:33.100771 1144038 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-789752 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.53s)

TestAddons/parallel/Ingress (145.34s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-789752 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-789752 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-789752 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [7176a34a-4645-448d-a4f4-57818624dbba] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [7176a34a-4645-448d-a4f4-57818624dbba] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003569076s
I1027 22:20:14.548590 1134735 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-789752 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.30425435s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-789752 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
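
Before the post-mortem below: the step that actually failed was the in-node curl with a spoofed Host header. ingress-nginx routes requests by Host header, and ssh reports the remote command's exit status, so status 28 here is curl's exit code for an operation timeout, meaning the ingress never answered http://127.0.0.1/ within its time limit. A rough Go equivalent of that request, for illustration only (not the test's code):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// Equivalent of `curl http://127.0.0.1/ -H 'Host: nginx.example.com'`
    	// as run inside the node. Setting req.Host overrides the HTTP Host
    	// header, which the ingress controller uses to pick the backend.
    	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
    	if err != nil {
    		panic(err)
    	}
    	req.Host = "nginx.example.com"

    	client := &http.Client{Timeout: 10 * time.Second}
    	resp, err := client.Do(req)
    	if err != nil {
    		fmt.Println("request failed:", err) // a timeout here matches curl's exit 28
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("status:", resp.Status)
    }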
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-789752
helpers_test.go:243: (dbg) docker inspect addons-789752:

-- stdout --
	[
	    {
	        "Id": "a652b6a668fc097b87ba64479bb60d0fa96fd8202cb54c1c465cda9d5582703e",
	        "Created": "2025-10-27T22:16:56.276536241Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1135892,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:16:56.341918453Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/a652b6a668fc097b87ba64479bb60d0fa96fd8202cb54c1c465cda9d5582703e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a652b6a668fc097b87ba64479bb60d0fa96fd8202cb54c1c465cda9d5582703e/hostname",
	        "HostsPath": "/var/lib/docker/containers/a652b6a668fc097b87ba64479bb60d0fa96fd8202cb54c1c465cda9d5582703e/hosts",
	        "LogPath": "/var/lib/docker/containers/a652b6a668fc097b87ba64479bb60d0fa96fd8202cb54c1c465cda9d5582703e/a652b6a668fc097b87ba64479bb60d0fa96fd8202cb54c1c465cda9d5582703e-json.log",
	        "Name": "/addons-789752",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-789752:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-789752",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a652b6a668fc097b87ba64479bb60d0fa96fd8202cb54c1c465cda9d5582703e",
	                "LowerDir": "/var/lib/docker/overlay2/62f87de50b6dbb2bbfe076c22c0f2cec20f2ef1b875795166e656b44d4768fa3-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62f87de50b6dbb2bbfe076c22c0f2cec20f2ef1b875795166e656b44d4768fa3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62f87de50b6dbb2bbfe076c22c0f2cec20f2ef1b875795166e656b44d4768fa3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62f87de50b6dbb2bbfe076c22c0f2cec20f2ef1b875795166e656b44d4768fa3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-789752",
	                "Source": "/var/lib/docker/volumes/addons-789752/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-789752",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-789752",
	                "name.minikube.sigs.k8s.io": "addons-789752",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "812c284ee37f415262529cc381beeff44cbd597eca6c31c4139631dddd8e2112",
	            "SandboxKey": "/var/run/docker/netns/812c284ee37f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34244"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34245"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34248"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34246"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34247"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-789752": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:54:b4:b8:62:43",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "31fd7d19f51759ab9eab49efa050974b3167d16e1fa33389a6c36af428254f1c",
	                    "EndpointID": "1ab705215d64bddc6a7e502cf91fd108b90ad95eeeb0e5728441f639fe128d5f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-789752",
	                        "a652b6a668fc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-789752 -n addons-789752
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-789752 logs -n 25: (1.648014565s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-332028                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-332028 │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │ 27 Oct 25 22:16 UTC │
	│ start   │ --download-only -p binary-mirror-961152 --alsologtostderr --binary-mirror http://127.0.0.1:35369 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-961152   │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │                     │
	│ delete  │ -p binary-mirror-961152                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-961152   │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │ 27 Oct 25 22:16 UTC │
	│ addons  │ enable dashboard -p addons-789752                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │                     │
	│ addons  │ disable dashboard -p addons-789752                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │                     │
	│ start   │ -p addons-789752 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │ 27 Oct 25 22:19 UTC │
	│ addons  │ addons-789752 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │                     │
	│ addons  │ addons-789752 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │                     │
	│ addons  │ addons-789752 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │                     │
	│ addons  │ addons-789752 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │                     │
	│ ip      │ addons-789752 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │ 27 Oct 25 22:19 UTC │
	│ addons  │ addons-789752 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │                     │
	│ ssh     │ addons-789752 ssh cat /opt/local-path-provisioner/pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │ 27 Oct 25 22:19 UTC │
	│ addons  │ addons-789752 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │                     │
	│ addons  │ enable headlamp -p addons-789752 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │                     │
	│ addons  │ addons-789752 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │                     │
	│ addons  │ addons-789752 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │                     │
	│ addons  │ addons-789752 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │                     │
	│ addons  │ addons-789752 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:20 UTC │                     │
	│ ssh     │ addons-789752 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:20 UTC │                     │
	│ addons  │ addons-789752 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:20 UTC │                     │
	│ addons  │ addons-789752 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:20 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-789752                                                                                                                                                                                                                                                                                                                                                                                           │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:20 UTC │ 27 Oct 25 22:20 UTC │
	│ addons  │ addons-789752 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:20 UTC │                     │
	│ ip      │ addons-789752 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:22 UTC │ 27 Oct 25 22:22 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:16:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:16:30.580876 1135488 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:16:30.581040 1135488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:16:30.581050 1135488 out.go:374] Setting ErrFile to fd 2...
	I1027 22:16:30.581056 1135488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:16:30.581305 1135488 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:16:30.581749 1135488 out.go:368] Setting JSON to false
	I1027 22:16:30.582699 1135488 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17940,"bootTime":1761585451,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 22:16:30.582764 1135488 start.go:143] virtualization:  
	I1027 22:16:30.585995 1135488 out.go:179] * [addons-789752] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 22:16:30.589825 1135488 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:16:30.589939 1135488 notify.go:221] Checking for updates...
	I1027 22:16:30.595479 1135488 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:16:30.598306 1135488 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 22:16:30.601119 1135488 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 22:16:30.604025 1135488 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 22:16:30.606930 1135488 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:16:30.610045 1135488 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:16:30.632067 1135488 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 22:16:30.632188 1135488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:16:30.684904 1135488 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-27 22:16:30.67621124 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:16:30.685011 1135488 docker.go:318] overlay module found
	I1027 22:16:30.688051 1135488 out.go:179] * Using the docker driver based on user configuration
	I1027 22:16:30.690921 1135488 start.go:307] selected driver: docker
	I1027 22:16:30.690942 1135488 start.go:928] validating driver "docker" against <nil>
	I1027 22:16:30.690965 1135488 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:16:30.691699 1135488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:16:30.752785 1135488 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-27 22:16:30.743447368 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:16:30.752944 1135488 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 22:16:30.753186 1135488 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:16:30.756187 1135488 out.go:179] * Using Docker driver with root privileges
	I1027 22:16:30.758950 1135488 cni.go:84] Creating CNI manager for ""
	I1027 22:16:30.759035 1135488 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:16:30.759050 1135488 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 22:16:30.759132 1135488 start.go:351] cluster config:
	{Name:addons-789752 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-789752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:16:30.762177 1135488 out.go:179] * Starting "addons-789752" primary control-plane node in "addons-789752" cluster
	I1027 22:16:30.765068 1135488 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:16:30.768066 1135488 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:16:30.770914 1135488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:16:30.770985 1135488 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 22:16:30.770998 1135488 cache.go:59] Caching tarball of preloaded images
	I1027 22:16:30.770997 1135488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:16:30.771091 1135488 preload.go:233] Found /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 22:16:30.771101 1135488 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 22:16:30.771440 1135488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/config.json ...
	I1027 22:16:30.771470 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/config.json: {Name:mke88408baa530750bd9d1795792eabe215b0eaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:16:30.787525 1135488 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 22:16:30.787669 1135488 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 22:16:30.787695 1135488 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1027 22:16:30.787701 1135488 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1027 22:16:30.787713 1135488 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1027 22:16:30.787723 1135488 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1027 22:16:48.564650 1135488 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1027 22:16:48.564693 1135488 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:16:48.564736 1135488 start.go:360] acquireMachinesLock for addons-789752: {Name:mka636defb696345efb99c891c420d0f693c9864 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:16:48.565511 1135488 start.go:364] duration metric: took 748.088µs to acquireMachinesLock for "addons-789752"
	I1027 22:16:48.565552 1135488 start.go:93] Provisioning new machine with config: &{Name:addons-789752 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-789752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:16:48.565634 1135488 start.go:125] createHost starting for "" (driver="docker")
	I1027 22:16:48.569008 1135488 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1027 22:16:48.569236 1135488 start.go:159] libmachine.API.Create for "addons-789752" (driver="docker")
	I1027 22:16:48.569271 1135488 client.go:173] LocalClient.Create starting
	I1027 22:16:48.569397 1135488 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem
	I1027 22:16:49.384714 1135488 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem
	I1027 22:16:49.775495 1135488 cli_runner.go:164] Run: docker network inspect addons-789752 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 22:16:49.792236 1135488 cli_runner.go:211] docker network inspect addons-789752 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 22:16:49.792319 1135488 network_create.go:284] running [docker network inspect addons-789752] to gather additional debugging logs...
	I1027 22:16:49.792340 1135488 cli_runner.go:164] Run: docker network inspect addons-789752
	W1027 22:16:49.808583 1135488 cli_runner.go:211] docker network inspect addons-789752 returned with exit code 1
	I1027 22:16:49.808614 1135488 network_create.go:287] error running [docker network inspect addons-789752]: docker network inspect addons-789752: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-789752 not found
	I1027 22:16:49.808627 1135488 network_create.go:289] output of [docker network inspect addons-789752]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-789752 not found
	
	** /stderr **
	I1027 22:16:49.808724 1135488 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:16:49.824904 1135488 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c80640}
	I1027 22:16:49.824949 1135488 network_create.go:124] attempt to create docker network addons-789752 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1027 22:16:49.825004 1135488 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-789752 addons-789752
	I1027 22:16:49.883599 1135488 network_create.go:108] docker network addons-789752 192.168.49.0/24 created
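Note: minikube first probed for a free private subnet (192.168.49.0/24 here) and then created a labeled bridge network with the "docker network create" invocation logged above. As a verification sketch only (not part of this run, assuming the Docker CLI on the CI host), the resulting subnet can be read back:

	# Sketch: confirm the subnet of the network minikube just created.
	docker network inspect addons-789752 --format '{{(index .IPAM.Config 0).Subnet}}'
	# per the log, this should print: 192.168.49.0/24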
	I1027 22:16:49.883641 1135488 kic.go:121] calculated static IP "192.168.49.2" for the "addons-789752" container
	I1027 22:16:49.883736 1135488 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 22:16:49.898989 1135488 cli_runner.go:164] Run: docker volume create addons-789752 --label name.minikube.sigs.k8s.io=addons-789752 --label created_by.minikube.sigs.k8s.io=true
	I1027 22:16:49.915708 1135488 oci.go:103] Successfully created a docker volume addons-789752
	I1027 22:16:49.915807 1135488 cli_runner.go:164] Run: docker run --rm --name addons-789752-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-789752 --entrypoint /usr/bin/test -v addons-789752:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 22:16:51.548725 1135488 cli_runner.go:217] Completed: docker run --rm --name addons-789752-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-789752 --entrypoint /usr/bin/test -v addons-789752:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (1.632875871s)
	I1027 22:16:51.548759 1135488 oci.go:107] Successfully prepared a docker volume addons-789752
	I1027 22:16:51.548795 1135488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:16:51.548818 1135488 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 22:16:51.548883 1135488 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-789752:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 22:16:56.202232 1135488 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-789752:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.653298357s)
	I1027 22:16:56.202266 1135488 kic.go:203] duration metric: took 4.653444493s to extract preloaded images to volume ...
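The preload step above avoids pulling images inside the node: a throwaway kicbase sidecar container untars the preloaded CRI-O image store straight into the profile's Docker volume, which the node container later mounts at /var. A minimal spot-check of the volume contents might look like the following; the "alpine" image and the storage path are assumptions for illustration, not taken from this log:

	# Hypothetical spot-check: list the extracted CRI-O image store inside
	# the addons-789752 volume, mounted at /var the way the node mounts it.
	docker run --rm -v addons-789752:/var alpine ls /var/lib/containers/storage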
	W1027 22:16:56.202409 1135488 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 22:16:56.202533 1135488 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 22:16:56.261779 1135488 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-789752 --name addons-789752 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-789752 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-789752 --network addons-789752 --ip 192.168.49.2 --volume addons-789752:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 22:16:56.572116 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Running}}
	I1027 22:16:56.592646 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:16:56.614601 1135488 cli_runner.go:164] Run: docker exec addons-789752 stat /var/lib/dpkg/alternatives/iptables
	I1027 22:16:56.660529 1135488 oci.go:144] the created container "addons-789752" has a running status.
	I1027 22:16:56.660563 1135488 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa...
	I1027 22:16:57.111763 1135488 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 22:16:57.131337 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:16:57.148556 1135488 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 22:16:57.148575 1135488 kic_runner.go:114] Args: [docker exec --privileged addons-789752 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 22:16:57.188824 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:16:57.205446 1135488 machine.go:94] provisionDockerMachine start ...
	I1027 22:16:57.205585 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:16:57.222727 1135488 main.go:143] libmachine: Using SSH client type: native
	I1027 22:16:57.223066 1135488 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34244 <nil> <nil>}
	I1027 22:16:57.223082 1135488 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:16:57.223721 1135488 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1027 22:17:00.477344 1135488 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-789752
	
	I1027 22:17:00.477421 1135488 ubuntu.go:182] provisioning hostname "addons-789752"
	I1027 22:17:00.477521 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:00.509330 1135488 main.go:143] libmachine: Using SSH client type: native
	I1027 22:17:00.509682 1135488 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34244 <nil> <nil>}
	I1027 22:17:00.509695 1135488 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-789752 && echo "addons-789752" | sudo tee /etc/hostname
	I1027 22:17:00.680350 1135488 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-789752
	
	I1027 22:17:00.680450 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:00.700903 1135488 main.go:143] libmachine: Using SSH client type: native
	I1027 22:17:00.701236 1135488 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34244 <nil> <nil>}
	I1027 22:17:00.701252 1135488 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-789752' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-789752/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-789752' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:17:00.850894 1135488 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:17:00.850924 1135488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 22:17:00.850944 1135488 ubuntu.go:190] setting up certificates
	I1027 22:17:00.850954 1135488 provision.go:84] configureAuth start
	I1027 22:17:00.851019 1135488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-789752
	I1027 22:17:00.869776 1135488 provision.go:143] copyHostCerts
	I1027 22:17:00.869866 1135488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 22:17:00.870002 1135488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 22:17:00.870065 1135488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 22:17:00.870120 1135488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.addons-789752 san=[127.0.0.1 192.168.49.2 addons-789752 localhost minikube]
	I1027 22:17:00.959965 1135488 provision.go:177] copyRemoteCerts
	I1027 22:17:00.960040 1135488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:17:00.960080 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:00.977718 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:01.083033 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 22:17:01.103119 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 22:17:01.122559 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 22:17:01.143200 1135488 provision.go:87] duration metric: took 292.220535ms to configureAuth
	I1027 22:17:01.143226 1135488 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:17:01.143432 1135488 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:17:01.143535 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:01.163024 1135488 main.go:143] libmachine: Using SSH client type: native
	I1027 22:17:01.163387 1135488 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34244 <nil> <nil>}
	I1027 22:17:01.163410 1135488 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:17:01.432338 1135488 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:17:01.432358 1135488 machine.go:97] duration metric: took 4.226889139s to provisionDockerMachine
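Provisioning ends by writing CRIO_MINIKUBE_OPTIONS (the --insecure-registry flag for the service CIDR) to /etc/sysconfig/crio.minikube and restarting CRI-O, as the SSH command above shows. A hedged way to confirm the file landed, reusing the binary this run used:

	# Sketch only: read back the CRI-O options file written during provisioning.
	out/minikube-linux-arm64 -p addons-789752 ssh -- sudo cat /etc/sysconfig/crio.minikube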
	I1027 22:17:01.432369 1135488 client.go:176] duration metric: took 12.863090915s to LocalClient.Create
	I1027 22:17:01.432382 1135488 start.go:167] duration metric: took 12.863147105s to libmachine.API.Create "addons-789752"
	I1027 22:17:01.432390 1135488 start.go:293] postStartSetup for "addons-789752" (driver="docker")
	I1027 22:17:01.432399 1135488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:17:01.432475 1135488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:17:01.432514 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:01.459297 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:01.567203 1135488 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:17:01.570847 1135488 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:17:01.570882 1135488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:17:01.570896 1135488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 22:17:01.571018 1135488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 22:17:01.571065 1135488 start.go:296] duration metric: took 138.668869ms for postStartSetup
	I1027 22:17:01.571477 1135488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-789752
	I1027 22:17:01.589818 1135488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/config.json ...
	I1027 22:17:01.590153 1135488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:17:01.590201 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:01.608387 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:01.712289 1135488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:17:01.717212 1135488 start.go:128] duration metric: took 13.15155971s to createHost
	I1027 22:17:01.717241 1135488 start.go:83] releasing machines lock for "addons-789752", held for 13.151711572s
	I1027 22:17:01.717312 1135488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-789752
	I1027 22:17:01.734454 1135488 ssh_runner.go:195] Run: cat /version.json
	I1027 22:17:01.734526 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:01.734591 1135488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:17:01.734673 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:01.756293 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:01.760687 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:01.955021 1135488 ssh_runner.go:195] Run: systemctl --version
	I1027 22:17:01.961591 1135488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:17:01.999599 1135488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:17:02.005063 1135488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:17:02.005221 1135488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:17:02.039316 1135488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
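Rather than deleting conflicting bridge/podman CNI configs, minikube renames them with a .mk_disabled suffix (the find/mv pipeline above). What remains active can be listed afterwards; a quick check, assuming the node container is still up:

	# Sketch: list what is left under /etc/cni/net.d after the renames.
	out/minikube-linux-arm64 -p addons-789752 ssh -- ls /etc/cni/net.d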
	I1027 22:17:02.039394 1135488 start.go:496] detecting cgroup driver to use...
	I1027 22:17:02.039443 1135488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 22:17:02.039532 1135488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:17:02.058293 1135488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:17:02.072095 1135488 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:17:02.072211 1135488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:17:02.091763 1135488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:17:02.112262 1135488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:17:02.246849 1135488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:17:02.387792 1135488 docker.go:234] disabling docker service ...
	I1027 22:17:02.387950 1135488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:17:02.414740 1135488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:17:02.428728 1135488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:17:02.550070 1135488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:17:02.673547 1135488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:17:02.687693 1135488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:17:02.703825 1135488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:17:02.703902 1135488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:17:02.713391 1135488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 22:17:02.713513 1135488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:17:02.723245 1135488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:17:02.732494 1135488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:17:02.741683 1135488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:17:02.750467 1135488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:17:02.759664 1135488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:17:02.774315 1135488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:17:02.783707 1135488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:17:02.792777 1135488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:17:02.800695 1135488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:17:02.915786 1135488 ssh_runner.go:195] Run: sudo systemctl restart crio
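The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, the cgroupfs cgroup manager, the conmon cgroup, and the unprivileged-port sysctl, followed by a daemon-reload and a CRI-O restart. To review the resulting drop-in (a verification sketch, not part of the run):

	# Sketch: dump the CRI-O drop-in that the sed edits above produced.
	out/minikube-linux-arm64 -p addons-789752 ssh -- sudo cat /etc/crio/crio.conf.d/02-crio.conf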
	I1027 22:17:03.051340 1135488 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:17:03.051461 1135488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:17:03.055943 1135488 start.go:564] Will wait 60s for crictl version
	I1027 22:17:03.056032 1135488 ssh_runner.go:195] Run: which crictl
	I1027 22:17:03.060404 1135488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:17:03.088948 1135488 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
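Startup gates on the CRI socket appearing and on crictl answering a version probe; the block above is crictl's response over the gRPC socket. The probe is reproducible by hand, assuming the node container is up:

	# Sketch: repeat the runtime probe minikube just ran inside the node.
	out/minikube-linux-arm64 -p addons-789752 ssh -- sudo crictl version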
	I1027 22:17:03.089108 1135488 ssh_runner.go:195] Run: crio --version
	I1027 22:17:03.120282 1135488 ssh_runner.go:195] Run: crio --version
	I1027 22:17:03.152297 1135488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:17:03.155230 1135488 cli_runner.go:164] Run: docker network inspect addons-789752 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:17:03.172436 1135488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1027 22:17:03.176766 1135488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:17:03.188132 1135488 kubeadm.go:884] updating cluster {Name:addons-789752 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-789752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:17:03.188255 1135488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:17:03.188318 1135488 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:17:03.226466 1135488 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:17:03.226497 1135488 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:17:03.226556 1135488 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:17:03.252494 1135488 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:17:03.252520 1135488 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:17:03.252529 1135488 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1027 22:17:03.252618 1135488 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-789752 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-789752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
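The [Unit]/[Service] fragment above becomes a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below) that clears ExecStart and re-sets it with the per-node kubelet flags. A hedged way to view the merged unit afterwards:

	# Sketch: show the kubelet unit together with the 10-kubeadm.conf drop-in.
	out/minikube-linux-arm64 -p addons-789752 ssh -- sudo systemctl cat kubelet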
	I1027 22:17:03.252714 1135488 ssh_runner.go:195] Run: crio config
	I1027 22:17:03.307838 1135488 cni.go:84] Creating CNI manager for ""
	I1027 22:17:03.307908 1135488 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:17:03.307956 1135488 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:17:03.308009 1135488 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-789752 NodeName:addons-789752 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:17:03.308185 1135488 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-789752"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 22:17:03.308278 1135488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:17:03.316737 1135488 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:17:03.316816 1135488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:17:03.325120 1135488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1027 22:17:03.338361 1135488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:17:03.351971 1135488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
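The generated kubeadm config shown above is staged at /var/tmp/minikube/kubeadm.yaml.new. If one wanted to sanity-check such a config without mutating the node, kubeadm's init --dry-run is one option; this is a hypothetical step, not performed in this run, and it would still execute preflight checks:

	# Hypothetical validation sketch; the kubeadm path is the one the log
	# found under /var/lib/minikube/binaries.
	out/minikube-linux-arm64 -p addons-789752 ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run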
	I1027 22:17:03.365613 1135488 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:17:03.369479 1135488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:17:03.380402 1135488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:17:03.496853 1135488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:17:03.514973 1135488 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752 for IP: 192.168.49.2
	I1027 22:17:03.515004 1135488 certs.go:195] generating shared ca certs ...
	I1027 22:17:03.515035 1135488 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:03.515207 1135488 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 22:17:04.092899 1135488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt ...
	I1027 22:17:04.092935 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt: {Name:mk3d1ca9953d79b82e69ddd2b9bf0e1e9d4fc081 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:04.093759 1135488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key ...
	I1027 22:17:04.093779 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key: {Name:mk37097ff8d48d4c2d9e5dcc3749355e59f34b6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:04.093872 1135488 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 22:17:05.073432 1135488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt ...
	I1027 22:17:05.073468 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt: {Name:mkc6c7fe2cd51ad060e70d00520c07e6b8c3502c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:05.073671 1135488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key ...
	I1027 22:17:05.073686 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key: {Name:mk769d80a91bd0cfa1b5e6c741e3a5507bd17b68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:05.073776 1135488 certs.go:257] generating profile certs ...
	I1027 22:17:05.073835 1135488 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.key
	I1027 22:17:05.073854 1135488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt with IP's: []
	I1027 22:17:05.320429 1135488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt ...
	I1027 22:17:05.320475 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: {Name:mk9700168e780e5824228759b3d5fa3c0e849cb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:05.320673 1135488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.key ...
	I1027 22:17:05.320686 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.key: {Name:mk1f49e59a2be1496fcf09d2ca87a4f86d10357e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:05.320780 1135488 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.key.4f3a3f92
	I1027 22:17:05.320800 1135488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.crt.4f3a3f92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1027 22:17:06.349038 1135488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.crt.4f3a3f92 ...
	I1027 22:17:06.349073 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.crt.4f3a3f92: {Name:mk6add6615215d0c06da589649660f246c0aa3d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:06.349907 1135488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.key.4f3a3f92 ...
	I1027 22:17:06.349926 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.key.4f3a3f92: {Name:mk213c3bbf37fb1f4c149ede56d89ea432225480 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:06.350015 1135488 certs.go:382] copying /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.crt.4f3a3f92 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.crt
	I1027 22:17:06.350109 1135488 certs.go:386] copying /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.key.4f3a3f92 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.key
	I1027 22:17:06.350166 1135488 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/proxy-client.key
	I1027 22:17:06.350187 1135488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/proxy-client.crt with IP's: []
	I1027 22:17:06.752934 1135488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/proxy-client.crt ...
	I1027 22:17:06.752972 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/proxy-client.crt: {Name:mk70691924aeec9f578f4353fa8dfa906deb8f1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:06.753174 1135488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/proxy-client.key ...
	I1027 22:17:06.753196 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/proxy-client.key: {Name:mk0c09fe9610f9d81659d14ba30d07312ecd3100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:06.753413 1135488 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:17:06.753455 1135488 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 22:17:06.753484 1135488 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:17:06.753511 1135488 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 22:17:06.754065 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:17:06.773288 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 22:17:06.793546 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:17:06.813000 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:17:06.832014 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 22:17:06.851545 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:17:06.871465 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:17:06.889725 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 22:17:06.908761 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:17:06.928158 1135488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:17:06.941863 1135488 ssh_runner.go:195] Run: openssl version
	I1027 22:17:06.948517 1135488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:17:06.957282 1135488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:17:06.961320 1135488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:17:06.961430 1135488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:17:07.009238 1135488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
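The steps above publish the minikube CA into the node's OpenSSL trust store: link the cert into /usr/share/ca-certificates, compute its subject hash, and create the <hash>.0 symlink (b5213941.0 in this run) that OpenSSL's hashed-directory lookup expects. Condensed from the two log lines above:

    # Subject hash OpenSSL uses for c_rehash-style directory lookups.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # Expose the CA under /etc/ssl/certs/<hash>.0 so TLS clients trust it.
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"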
	I1027 22:17:07.018240 1135488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:17:07.021837 1135488 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 22:17:07.021885 1135488 kubeadm.go:401] StartCluster: {Name:addons-789752 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-789752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:17:07.021974 1135488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:17:07.022034 1135488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:17:07.050835 1135488 cri.go:89] found id: ""
	I1027 22:17:07.050923 1135488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:17:07.059062 1135488 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:17:07.067179 1135488 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 22:17:07.067249 1135488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:17:07.076034 1135488 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 22:17:07.076074 1135488 kubeadm.go:158] found existing configuration files:
	
	I1027 22:17:07.076131 1135488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 22:17:07.084492 1135488 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 22:17:07.084563 1135488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:17:07.092609 1135488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 22:17:07.101343 1135488 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 22:17:07.101536 1135488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:17:07.109643 1135488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 22:17:07.117976 1135488 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 22:17:07.118046 1135488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:17:07.125662 1135488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 22:17:07.135490 1135488 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 22:17:07.135629 1135488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 22:17:07.143353 1135488 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
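The init command pins PATH to the versioned binaries under /var/lib/minikube/binaries and suppresses the preflight checks that cannot pass inside a container (port, swap, CPU/memory, and kernel-config probes). Re-wrapped, its shape is:

    # Same invocation as the log line above; the full --ignore-preflight-errors
    # list is unchanged and elided here only for line length.
    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
      kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,...,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables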
	I1027 22:17:07.198083 1135488 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 22:17:07.199256 1135488 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 22:17:07.240585 1135488 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 22:17:07.240661 1135488 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 22:17:07.240709 1135488 kubeadm.go:319] OS: Linux
	I1027 22:17:07.240764 1135488 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 22:17:07.240819 1135488 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1027 22:17:07.240872 1135488 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 22:17:07.240927 1135488 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 22:17:07.240982 1135488 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 22:17:07.241034 1135488 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 22:17:07.241086 1135488 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 22:17:07.241142 1135488 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 22:17:07.241196 1135488 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1027 22:17:07.311967 1135488 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 22:17:07.312098 1135488 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 22:17:07.312200 1135488 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 22:17:07.322905 1135488 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 22:17:07.329092 1135488 out.go:252]   - Generating certificates and keys ...
	I1027 22:17:07.329281 1135488 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 22:17:07.329416 1135488 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 22:17:08.892933 1135488 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 22:17:09.406541 1135488 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 22:17:10.738372 1135488 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 22:17:11.101915 1135488 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 22:17:11.593193 1135488 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 22:17:11.593553 1135488 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-789752 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1027 22:17:12.332425 1135488 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 22:17:12.332784 1135488 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-789752 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1027 22:17:12.506737 1135488 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 22:17:13.577590 1135488 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 22:17:14.609769 1135488 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 22:17:14.610070 1135488 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 22:17:15.043944 1135488 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 22:17:16.209102 1135488 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 22:17:16.344266 1135488 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 22:17:16.899742 1135488 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 22:17:18.627074 1135488 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 22:17:18.628223 1135488 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 22:17:18.631290 1135488 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 22:17:18.634739 1135488 out.go:252]   - Booting up control plane ...
	I1027 22:17:18.634843 1135488 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 22:17:18.634925 1135488 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 22:17:18.636136 1135488 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 22:17:18.652465 1135488 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 22:17:18.652821 1135488 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 22:17:18.660208 1135488 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 22:17:18.660565 1135488 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 22:17:18.660804 1135488 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 22:17:18.787766 1135488 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 22:17:18.787891 1135488 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 22:17:19.788564 1135488 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000891412s
	I1027 22:17:19.792584 1135488 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 22:17:19.792688 1135488 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1027 22:17:19.793019 1135488 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 22:17:19.793110 1135488 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 22:17:22.595821 1135488 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.802790665s
	I1027 22:17:23.693531 1135488 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.900924176s
	I1027 22:17:25.795276 1135488 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002582828s
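Each [control-plane-check] line maps to a plain HTTPS health endpoint, so the same probes can be replayed by hand when a boot hangs. A minimal sketch against the URLs shown above (-k because the component serving certs are not in the host trust store):

    curl -k https://192.168.49.2:8443/livez     # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz     # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez       # kube-scheduler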
	I1027 22:17:25.814592 1135488 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 22:17:25.834186 1135488 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 22:17:25.847886 1135488 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 22:17:25.848113 1135488 kubeadm.go:319] [mark-control-plane] Marking the node addons-789752 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 22:17:25.861453 1135488 kubeadm.go:319] [bootstrap-token] Using token: yt42fj.1l94hwf0zkgx61b4
	I1027 22:17:25.864650 1135488 out.go:252]   - Configuring RBAC rules ...
	I1027 22:17:25.864800 1135488 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 22:17:25.869081 1135488 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 22:17:25.877392 1135488 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 22:17:25.883954 1135488 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 22:17:25.888031 1135488 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 22:17:25.892008 1135488 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 22:17:26.202130 1135488 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 22:17:26.630775 1135488 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 22:17:27.202687 1135488 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 22:17:27.203910 1135488 kubeadm.go:319] 
	I1027 22:17:27.203996 1135488 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 22:17:27.204025 1135488 kubeadm.go:319] 
	I1027 22:17:27.204112 1135488 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 22:17:27.204121 1135488 kubeadm.go:319] 
	I1027 22:17:27.204149 1135488 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 22:17:27.204216 1135488 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 22:17:27.204273 1135488 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 22:17:27.204282 1135488 kubeadm.go:319] 
	I1027 22:17:27.204339 1135488 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 22:17:27.204348 1135488 kubeadm.go:319] 
	I1027 22:17:27.204399 1135488 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 22:17:27.204407 1135488 kubeadm.go:319] 
	I1027 22:17:27.204463 1135488 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 22:17:27.204547 1135488 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 22:17:27.204623 1135488 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 22:17:27.204632 1135488 kubeadm.go:319] 
	I1027 22:17:27.204721 1135488 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 22:17:27.204809 1135488 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 22:17:27.204818 1135488 kubeadm.go:319] 
	I1027 22:17:27.204906 1135488 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yt42fj.1l94hwf0zkgx61b4 \
	I1027 22:17:27.205019 1135488 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 \
	I1027 22:17:27.205043 1135488 kubeadm.go:319] 	--control-plane 
	I1027 22:17:27.205051 1135488 kubeadm.go:319] 
	I1027 22:17:27.205141 1135488 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 22:17:27.205150 1135488 kubeadm.go:319] 
	I1027 22:17:27.205236 1135488 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yt42fj.1l94hwf0zkgx61b4 \
	I1027 22:17:27.205353 1135488 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 
	I1027 22:17:27.209538 1135488 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 22:17:27.209774 1135488 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 22:17:27.209890 1135488 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 22:17:27.209910 1135488 cni.go:84] Creating CNI manager for ""
	I1027 22:17:27.209923 1135488 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:17:27.213098 1135488 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 22:17:27.215846 1135488 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 22:17:27.219873 1135488 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 22:17:27.219897 1135488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 22:17:27.233096 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 22:17:27.528495 1135488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 22:17:27.528633 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:27.528740 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-789752 minikube.k8s.io/updated_at=2025_10_27T22_17_27_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=addons-789752 minikube.k8s.io/primary=true
	I1027 22:17:27.679149 1135488 ops.go:34] apiserver oom_adj: -16
	I1027 22:17:27.679256 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:28.179980 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:28.679634 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:29.180264 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:29.680127 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:30.179664 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:30.679590 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:31.180098 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:31.679963 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:31.832844 1135488 kubeadm.go:1114] duration metric: took 4.304251448s to wait for elevateKubeSystemPrivileges
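The burst of "kubectl get sa default" calls at roughly 500 ms intervals is the wait behind elevateKubeSystemPrivileges: the cluster-admin binding created at 22:17:27 is only useful once the default service account exists, so minikube polls for it. A shell sketch of that gate (the interval is inferred from the log spacing; flag values come from the lines above):

    # Block until the default service account appears.
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done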
	I1027 22:17:31.832881 1135488 kubeadm.go:403] duration metric: took 24.810998943s to StartCluster
	I1027 22:17:31.832899 1135488 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:31.833657 1135488 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 22:17:31.834059 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:31.834278 1135488 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:17:31.834436 1135488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 22:17:31.834708 1135488 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:17:31.834746 1135488 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1027 22:17:31.834831 1135488 addons.go:69] Setting yakd=true in profile "addons-789752"
	I1027 22:17:31.834851 1135488 addons.go:238] Setting addon yakd=true in "addons-789752"
	I1027 22:17:31.834873 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.835358 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.835804 1135488 addons.go:69] Setting inspektor-gadget=true in profile "addons-789752"
	I1027 22:17:31.835827 1135488 addons.go:238] Setting addon inspektor-gadget=true in "addons-789752"
	I1027 22:17:31.835851 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.836275 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.836418 1135488 addons.go:69] Setting metrics-server=true in profile "addons-789752"
	I1027 22:17:31.836448 1135488 addons.go:238] Setting addon metrics-server=true in "addons-789752"
	I1027 22:17:31.836506 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.836928 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.838593 1135488 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-789752"
	I1027 22:17:31.838625 1135488 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-789752"
	I1027 22:17:31.838662 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.839109 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.839547 1135488 addons.go:69] Setting registry=true in profile "addons-789752"
	I1027 22:17:31.839575 1135488 addons.go:238] Setting addon registry=true in "addons-789752"
	I1027 22:17:31.839601 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.840036 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.855237 1135488 addons.go:69] Setting registry-creds=true in profile "addons-789752"
	I1027 22:17:31.855273 1135488 addons.go:238] Setting addon registry-creds=true in "addons-789752"
	I1027 22:17:31.855309 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.855770 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.857893 1135488 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-789752"
	I1027 22:17:31.857928 1135488 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-789752"
	I1027 22:17:31.857962 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.858448 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.871265 1135488 addons.go:69] Setting cloud-spanner=true in profile "addons-789752"
	I1027 22:17:31.871315 1135488 addons.go:238] Setting addon cloud-spanner=true in "addons-789752"
	I1027 22:17:31.871350 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.871810 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.884412 1135488 addons.go:69] Setting storage-provisioner=true in profile "addons-789752"
	I1027 22:17:31.884453 1135488 addons.go:238] Setting addon storage-provisioner=true in "addons-789752"
	I1027 22:17:31.884486 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.884974 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.888417 1135488 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-789752"
	I1027 22:17:31.888480 1135488 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-789752"
	I1027 22:17:31.888509 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.888971 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.897146 1135488 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-789752"
	I1027 22:17:31.898473 1135488 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-789752"
	I1027 22:17:31.898838 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.924133 1135488 addons.go:69] Setting volcano=true in profile "addons-789752"
	I1027 22:17:31.924168 1135488 addons.go:238] Setting addon volcano=true in "addons-789752"
	I1027 22:17:31.924204 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.924684 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.925866 1135488 addons.go:69] Setting default-storageclass=true in profile "addons-789752"
	I1027 22:17:31.925896 1135488 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-789752"
	I1027 22:17:31.926200 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.951600 1135488 addons.go:69] Setting gcp-auth=true in profile "addons-789752"
	I1027 22:17:31.951635 1135488 mustload.go:66] Loading cluster: addons-789752
	I1027 22:17:31.951848 1135488 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:17:31.952109 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.953826 1135488 addons.go:69] Setting volumesnapshots=true in profile "addons-789752"
	I1027 22:17:31.953904 1135488 addons.go:238] Setting addon volumesnapshots=true in "addons-789752"
	I1027 22:17:31.953962 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.954479 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.979363 1135488 addons.go:69] Setting ingress=true in profile "addons-789752"
	I1027 22:17:31.979398 1135488 addons.go:238] Setting addon ingress=true in "addons-789752"
	I1027 22:17:31.979454 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.979913 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.983030 1135488 out.go:179] * Verifying Kubernetes components...
	I1027 22:17:31.991563 1135488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:17:31.999781 1135488 addons.go:69] Setting ingress-dns=true in profile "addons-789752"
	I1027 22:17:31.999880 1135488 addons.go:238] Setting addon ingress-dns=true in "addons-789752"
	I1027 22:17:31.999928 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:32.000426 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:32.014544 1135488 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1027 22:17:32.017940 1135488 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1027 22:17:32.018188 1135488 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 22:17:32.018227 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1027 22:17:32.018476 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
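Each addon installer opens its own SSH session, and each one rediscovers sshd's host-side port with the docker inspect template above (the resulting port, 34244, shows up in the sshutil lines below). Equivalent one-liners:

    # Host port Docker mapped to the container's 22/tcp.
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-789752
    docker port addons-789752 22   # same mapping in host:port form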
	I1027 22:17:32.018863 1135488 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1027 22:17:32.018250 1135488 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1027 22:17:32.028781 1135488 out.go:179]   - Using image docker.io/registry:3.0.0
	I1027 22:17:32.032086 1135488 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1027 22:17:32.032167 1135488 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1027 22:17:32.032271 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.052641 1135488 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1027 22:17:32.052720 1135488 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1027 22:17:32.053239 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.054163 1135488 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1027 22:17:32.054206 1135488 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1027 22:17:32.063802 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.090354 1135488 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1027 22:17:32.093872 1135488 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1027 22:17:32.098937 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1027 22:17:32.099045 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.102643 1135488 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1027 22:17:32.105322 1135488 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1027 22:17:32.105806 1135488 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1027 22:17:32.108228 1135488 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 22:17:32.108249 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1027 22:17:32.108311 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.119744 1135488 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 22:17:32.119811 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1027 22:17:32.119903 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.139338 1135488 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1027 22:17:32.139362 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1027 22:17:32.139437 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.157592 1135488 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:17:32.160862 1135488 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:17:32.160889 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:17:32.160958 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	W1027 22:17:32.179690 1135488 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1027 22:17:32.204186 1135488 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-789752"
	I1027 22:17:32.204230 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:32.204653 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:32.226365 1135488 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1027 22:17:32.228622 1135488 addons.go:238] Setting addon default-storageclass=true in "addons-789752"
	I1027 22:17:32.228660 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:32.229064 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:32.244696 1135488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1027 22:17:32.251329 1135488 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1027 22:17:32.257003 1135488 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1027 22:17:32.257029 1135488 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1027 22:17:32.257111 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.257301 1135488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1027 22:17:32.263843 1135488 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1027 22:17:32.266876 1135488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 22:17:32.266989 1135488 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 22:17:32.267000 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1027 22:17:32.267058 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.285755 1135488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1027 22:17:32.290535 1135488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1027 22:17:32.294540 1135488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1027 22:17:32.298519 1135488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1027 22:17:32.302616 1135488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 22:17:32.302801 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:32.311473 1135488 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1027 22:17:32.311600 1135488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1027 22:17:32.315425 1135488 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 22:17:32.315448 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1027 22:17:32.315510 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.315721 1135488 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1027 22:17:32.315733 1135488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1027 22:17:32.315781 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.371241 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.382446 1135488 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1027 22:17:32.385467 1135488 out.go:179]   - Using image docker.io/busybox:stable
	I1027 22:17:32.385609 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.388613 1135488 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 22:17:32.388637 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1027 22:17:32.388753 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.394709 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.420624 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.420670 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.423964 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.442680 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.478604 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.496075 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.498646 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.509923 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.535214 1135488 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:17:32.535235 1135488 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:17:32.535308 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.535544 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	W1027 22:17:32.544102 1135488 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 22:17:32.544201 1135488 retry.go:31] will retry after 233.663306ms: ssh: handshake failed: EOF
	I1027 22:17:32.549768 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.559083 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	W1027 22:17:32.563825 1135488 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 22:17:32.563851 1135488 retry.go:31] will retry after 302.013161ms: ssh: handshake failed: EOF
	I1027 22:17:32.578272 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.748784 1135488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
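The pipeline above rewrites the live CoreDNS config in place: dump the coredns ConfigMap, splice a hosts block (mapping host.minikube.internal to the gateway 192.168.49.1, with fallthrough for everything else) ahead of the "forward . /etc/resolv.conf" line plus a log directive ahead of errors, then push the result back with kubectl replace. The same pipeline with the sudo and --kubeconfig plumbing stripped:

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | kubectl replace -f -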
	I1027 22:17:32.749041 1135488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1027 22:17:32.779496 1135488 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 22:17:32.779522 1135488 retry.go:31] will retry after 194.824224ms: ssh: handshake failed: EOF
	W1027 22:17:32.868174 1135488 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 22:17:32.868210 1135488 retry.go:31] will retry after 439.617533ms: ssh: handshake failed: EOF
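The "handshake failed: EOF" warnings here are a benign startup race: many concurrent dials hit the container's sshd before it is fully ready, and retry.go backs each one off by a randomized few hundred milliseconds before succeeding. A loose bash rendering of that retry-with-jitter idea (the real logic is Go; port and user are taken from the sshutil lines):

    # Keep dialing with 100-499 ms of jitter until sshd accepts the handshake.
    until ssh -p 34244 docker@127.0.0.1 true 2>/dev/null; do
      sleep "0.$((RANDOM % 400 + 100))"
    done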
	I1027 22:17:33.043502 1135488 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:33.043531 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1027 22:17:33.049672 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1027 22:17:33.107706 1135488 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1027 22:17:33.107769 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1027 22:17:33.120976 1135488 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1027 22:17:33.121051 1135488 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1027 22:17:33.128395 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 22:17:33.143070 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 22:17:33.158857 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 22:17:33.205985 1135488 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1027 22:17:33.206007 1135488 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1027 22:17:33.222061 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:17:33.225381 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:17:33.240303 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:33.296925 1135488 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1027 22:17:33.296994 1135488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1027 22:17:33.300261 1135488 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1027 22:17:33.300325 1135488 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1027 22:17:33.305513 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 22:17:33.311232 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 22:17:33.321494 1135488 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1027 22:17:33.321563 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1027 22:17:33.413006 1135488 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1027 22:17:33.413086 1135488 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1027 22:17:33.512101 1135488 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 22:17:33.512178 1135488 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1027 22:17:33.535736 1135488 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1027 22:17:33.535804 1135488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1027 22:17:33.570249 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1027 22:17:33.615329 1135488 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1027 22:17:33.615405 1135488 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1027 22:17:33.624448 1135488 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1027 22:17:33.624520 1135488 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1027 22:17:33.731555 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 22:17:33.734905 1135488 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1027 22:17:33.734971 1135488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1027 22:17:33.763905 1135488 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1027 22:17:33.763969 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1027 22:17:33.765939 1135488 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1027 22:17:33.766003 1135488 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1027 22:17:33.956042 1135488 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1027 22:17:33.956128 1135488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1027 22:17:33.957173 1135488 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1027 22:17:33.957222 1135488 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1027 22:17:33.984725 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1027 22:17:34.102422 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 22:17:34.148787 1135488 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1027 22:17:34.148863 1135488 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1027 22:17:34.226692 1135488 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1027 22:17:34.226759 1135488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1027 22:17:34.374655 1135488 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 22:17:34.374727 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1027 22:17:34.413208 1135488 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1027 22:17:34.413274 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1027 22:17:34.572422 1135488 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1027 22:17:34.572498 1135488 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1027 22:17:34.584287 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 22:17:34.784913 1135488 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.035809863s)
	I1027 22:17:34.784993 1135488 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.036139476s)
	I1027 22:17:34.785076 1135488 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
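The two-second ssh_runner command above rewrites the CoreDNS ConfigMap in place: a sed pipeline inserts a hosts block ahead of the "forward . /etc/resolv.conf" directive so pods can resolve host.minikube.internal to the host gateway (192.168.49.1 here). Below is a minimal Go sketch of that Corefile transformation; the function name and standalone main are illustrative, not minikube's implementation (which shells out to sed and kubectl replace as logged).

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord splices a hosts{} stanza in front of the forward
	// directive of a Corefile, mirroring the sed edit in the log above.
	func injectHostRecord(corefile, hostIP string) string {
		stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		var b strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				b.WriteString(stanza) // insert just before the forward directive
			}
			b.WriteString(line)
		}
		return b.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
	}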
	I1027 22:17:34.786695 1135488 node_ready.go:35] waiting up to 6m0s for node "addons-789752" to be "Ready" ...
	I1027 22:17:34.840601 1135488 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1027 22:17:34.840666 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1027 22:17:34.963163 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.913450691s)
	I1027 22:17:35.104223 1135488 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1027 22:17:35.104543 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1027 22:17:35.268927 1135488 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 22:17:35.268949 1135488 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1027 22:17:35.290907 1135488 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-789752" context rescaled to 1 replicas
	I1027 22:17:35.413332 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1027 22:17:36.823648 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:38.015395 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.886915728s)
	I1027 22:17:38.015433 1135488 addons.go:479] Verifying addon ingress=true in "addons-789752"
	I1027 22:17:38.015606 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.872457534s)
	I1027 22:17:38.015770 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.856839349s)
	I1027 22:17:38.015854 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.793776677s)
	I1027 22:17:38.015972 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.775605522s)
	W1027 22:17:38.015995 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:38.016012 1135488 retry.go:31] will retry after 259.502133ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
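Every retry of this apply fails identically: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest's objects lack the mandatory apiVersion and kind fields, and retrying cannot repair a malformed file. For reference, a hypothetical skeleton of the header every applied object needs (this is not the actual Inspektor Gadget CRD):

	package main

	import "fmt"

	// Hypothetical skeleton of the header fields the validator reports
	// missing above; the real Inspektor Gadget CRD will differ.
	const crdSkeleton = `apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: traces.gadget.kinvolk.io
	# spec elided; apiVersion and kind are the fields kubectl flags.
	`

	func main() { fmt.Print(crdSkeleton) }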
	I1027 22:17:38.016054 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.710477319s)
	I1027 22:17:38.016099 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.704806841s)
	I1027 22:17:38.016131 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.445816965s)
	I1027 22:17:38.016144 1135488 addons.go:479] Verifying addon registry=true in "addons-789752"
	I1027 22:17:38.016231 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.790492792s)
	I1027 22:17:38.016705 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.285080204s)
	I1027 22:17:38.016734 1135488 addons.go:479] Verifying addon metrics-server=true in "addons-789752"
	I1027 22:17:38.016776 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.03197535s)
	I1027 22:17:38.016918 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.914408376s)
	I1027 22:17:38.017077 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.43270494s)
	W1027 22:17:38.017106 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1027 22:17:38.017122 1135488 retry.go:31] will retry after 159.932379ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
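Unlike the ig-crd.yaml failure, this one is a pure ordering race: the VolumeSnapshotClass is applied in the same kubectl invocation as the CRDs that define its kind, so the API server has no REST mapping for it yet, and the retry succeeds once the CRDs are established. A hedged sketch of the safe ordering, shelling out to kubectl as minikube does (paths are the addon manifests from the log; kubectl must be on PATH):

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes kubectl and aborts on failure, printing its output.
	func run(args ...string) {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
	}

	func main() {
		// 1. Install the snapshot CRDs first.
		run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
		// 2. Block until the CRD is served before creating instances of it.
		run("wait", "--for=condition=established",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io", "--timeout=60s")
		// 3. Only now apply objects of kind VolumeSnapshotClass.
		run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
	}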
	I1027 22:17:38.019634 1135488 out.go:179] * Verifying registry addon...
	I1027 22:17:38.019674 1135488 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-789752 service yakd-dashboard -n yakd-dashboard
	
	I1027 22:17:38.019802 1135488 out.go:179] * Verifying ingress addon...
	I1027 22:17:38.024285 1135488 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1027 22:17:38.025280 1135488 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1027 22:17:38.036245 1135488 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 22:17:38.036310 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:38.037182 1135488 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1027 22:17:38.037200 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 22:17:38.063123 1135488 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
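The default-storageclass failure above is a plain optimistic-concurrency conflict: something else updated the local-path StorageClass between minikube's read and its write, so the stale resourceVersion was rejected. The idiomatic client-go answer is retry.RetryOnConflict, which re-reads and re-applies the mutation; this is a hedged standalone sketch (kubeconfig path taken from the log), not minikube's actual addon callback:

	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			// Re-read on every attempt so the update carries the
			// latest resourceVersion instead of a stale one.
			sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			// Demote local-path so another class can become the default.
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err
		})
		if err != nil {
			log.Fatal(err)
		}
	}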
	I1027 22:17:38.178039 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 22:17:38.276013 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:38.350642 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.937221549s)
	I1027 22:17:38.350676 1135488 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-789752"
	I1027 22:17:38.356981 1135488 out.go:179] * Verifying csi-hostpath-driver addon...
	I1027 22:17:38.359827 1135488 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1027 22:17:38.370160 1135488 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 22:17:38.370186 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
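The kapi.go lines that repeat from here on are a poll loop: list the pods matching the addon's label selector and wait until every one reports Running. A rough client-go equivalent, under the assumption that listing and phase-checking is all the loop does (interval and timeout are arbitrary choices for the sketch):

	package main

	import (
		"context"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls pods matching selector in ns until all Running.
	func waitForLabel(cs *kubernetes.Clientset, ns, selector string) error {
		return wait.PollUntilContextTimeout(context.Background(),
			500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx,
					metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // keep polling through transient states
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil
					}
				}
				return true, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
			log.Fatal(err)
		}
	}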
	I1027 22:17:38.529347 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:38.530016 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:38.864394 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:39.030207 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:39.030323 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 22:17:39.289886 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:39.363795 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:39.527479 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:39.528512 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:39.863353 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:39.912705 1135488 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1027 22:17:39.912785 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:39.930296 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:40.028217 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:40.028800 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:40.049714 1135488 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1027 22:17:40.064703 1135488 addons.go:238] Setting addon gcp-auth=true in "addons-789752"
	I1027 22:17:40.064764 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:40.065203 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:40.082707 1135488 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1027 22:17:40.082773 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:40.100322 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:40.363314 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:40.527608 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:40.528526 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:40.867150 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:41.033048 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:41.033444 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:41.123706 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.945614677s)
	I1027 22:17:41.123835 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.847787093s)
	W1027 22:17:41.123857 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:41.123869 1135488 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.041124811s)
	I1027 22:17:41.123875 1135488 retry.go:31] will retry after 545.63937ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1027 22:17:41.126991 1135488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 22:17:41.129876 1135488 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1027 22:17:41.132662 1135488 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1027 22:17:41.132690 1135488 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1027 22:17:41.146129 1135488 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1027 22:17:41.146199 1135488 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1027 22:17:41.159674 1135488 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 22:17:41.159698 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1027 22:17:41.173488 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	W1027 22:17:41.290614 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:41.365504 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:41.530205 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:41.601087 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:41.669554 1135488 addons.go:479] Verifying addon gcp-auth=true in "addons-789752"
	I1027 22:17:41.669801 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:41.672791 1135488 out.go:179] * Verifying gcp-auth addon...
	I1027 22:17:41.676424 1135488 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1027 22:17:41.697727 1135488 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1027 22:17:41.697798 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:41.863857 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:42.028181 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:42.029758 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:42.180707 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:42.363632 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1027 22:17:42.520225 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:42.520299 1135488 retry.go:31] will retry after 490.646191ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1027 22:17:42.527418 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:42.528460 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:42.681075 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:42.862881 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:43.011229 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:43.028053 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:43.028731 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:43.180575 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:43.291250 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:43.363723 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:43.527510 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:43.530429 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:43.679890 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:43.859257 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:43.859291 1135488 retry.go:31] will retry after 729.731699ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1027 22:17:43.862869 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:44.029040 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:44.029304 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:44.180786 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:44.364732 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:44.527726 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:44.528839 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:44.590007 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:44.686540 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:44.863324 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:45.047005 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:45.051195 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:45.181781 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:45.292545 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:45.364825 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:45.530519 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:45.530771 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 22:17:45.535017 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:45.535054 1135488 retry.go:31] will retry after 757.383788ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1027 22:17:45.680779 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:45.863213 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:46.027139 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:46.028535 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:46.179469 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:46.292576 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:46.366066 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:46.528948 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:46.529123 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:46.688597 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:46.863757 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:47.029552 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:47.029965 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 22:17:47.102550 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:47.102653 1135488 retry.go:31] will retry after 1.667215898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1027 22:17:47.179723 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:47.363092 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:47.527404 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:47.527945 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:47.682483 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:47.790200 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:47.863206 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:48.027442 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:48.028685 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:48.179806 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:48.363759 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:48.528204 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:48.528353 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:48.692219 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:48.770445 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:48.863527 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:49.029017 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:49.030296 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:49.190442 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:49.363640 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:49.528767 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:49.529387 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 22:17:49.583076 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:49.583106 1135488 retry.go:31] will retry after 3.944383448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1027 22:17:49.687379 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:49.862876 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:50.028975 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:50.029329 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:50.180223 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:50.290543 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:50.362693 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:50.529152 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:50.529344 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:50.680144 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:50.863105 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:51.027905 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:51.029467 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:51.179635 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:51.363531 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:51.528638 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:51.528678 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:51.685232 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:51.863324 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:52.028604 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:52.028796 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:52.179934 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:52.362735 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:52.528686 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:52.528836 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:52.685443 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:52.790346 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:52.863212 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:53.027349 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:53.028442 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:53.179830 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:53.363432 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:53.527006 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:53.528272 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:53.529834 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:53.680036 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:53.863760 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:54.029674 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:54.029903 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:54.180022 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:54.328716 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:54.328751 1135488 retry.go:31] will retry after 5.199387802s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
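Across these attempts the retry.go delays climb from roughly a quarter second (259ms, 490ms, 729ms) to several seconds (1.67s, 3.94s, 5.2s): a jittered exponential backoff. A minimal stdlib sketch of that policy (base, growth factor, and cap are illustrative; minikube's exact tuning may differ):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// backoff returns a jittered, exponentially growing delay capped at
	// a limit, matching the shape of the retry.go delays in the log.
	func backoff(attempt int) time.Duration {
		base := 250 * time.Millisecond
		d := base << attempt                      // 250ms, 500ms, 1s, 2s, ...
		d += time.Duration(rand.Int63n(int64(d))) // up to 100% jitter
		if limit := 10 * time.Second; d > limit {
			return limit
		}
		return d
	}

	func main() {
		for attempt := 0; attempt < 6; attempt++ {
			fmt.Printf("attempt %d: wait %v\n", attempt, backoff(attempt))
		}
	}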
	I1027 22:17:54.363362 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:54.527619 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:54.528250 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:54.686317 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:54.863002 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:55.028845 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:55.029136 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:55.179953 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:55.290121 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:55.363109 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:55.529885 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:55.530116 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:55.686240 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:55.862903 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:56.028547 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:56.028788 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:56.179872 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:56.363971 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:56.528236 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:56.528286 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:56.684572 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:56.863509 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:57.027741 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:57.029740 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:57.179657 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:57.290473 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:57.363732 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:57.527464 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:57.531193 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:57.687794 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:57.862955 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:58.028077 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:58.028711 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:58.179825 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:58.363314 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:58.527416 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:58.527870 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:58.685549 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:58.863192 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:59.027484 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:59.029312 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:59.180240 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:59.363432 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:59.527303 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:59.527933 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:59.528944 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:59.680451 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:59.790900 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:59.863472 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:00.044540 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:00.045566 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:00.201716 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:00.365245 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:00.529482 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:00.529826 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:00.543601 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.014624652s)
	W1027 22:18:00.543641 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:18:00.543666 1135488 retry.go:31] will retry after 6.34078197s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
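	Note the retry cadence: 5.2s above, then 6.3s, 14.1s, and 19.6s later in the log, consistent with a randomized, growing backoff in retry.go. A rough shell equivalent of that loop, purely illustrative (minikube's actual delays are jittered, and this loop can never terminate while the manifest stays malformed):
	
	delay=5
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
	      -f /etc/kubernetes/addons/ig-crd.yaml \
	      -f /etc/kubernetes/addons/ig-deployment.yaml; do
	  sleep "$delay"          # back off before the next attempt
	  delay=$((delay * 2))    # grow the wait, roughly like retry.go
	done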
	I1027 22:18:00.685067 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:00.862804 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:01.028219 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:01.028391 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:01.179437 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:01.363076 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:01.527751 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:01.528090 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:01.684140 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:01.862773 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:02.027759 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:02.028733 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:02.179817 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:18:02.289663 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:18:02.363586 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:02.528251 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:02.528691 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:02.685819 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:02.863812 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:03.027929 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:03.029145 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:03.180200 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:03.363149 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:03.528474 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:03.528615 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:03.685354 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:03.862815 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:04.027971 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:04.028398 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:04.179417 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:18:04.290218 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:18:04.363368 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:04.527632 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:04.528542 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:04.685507 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:04.863919 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:05.028652 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:05.028779 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:05.179998 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:05.363258 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:05.527728 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:05.528622 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:05.686029 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:05.862845 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:06.028888 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:06.029345 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:06.180237 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:18:06.290327 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:18:06.363524 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:06.527542 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:06.528764 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:06.684646 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:06.863716 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:06.884639 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:18:07.030081 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:07.030537 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:07.179597 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:07.363659 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:07.531089 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:07.531482 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:07.693064 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:18:07.706848 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:18:07.706879 1135488 retry.go:31] will retry after 14.118883052s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:18:07.862779 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:08.028584 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:08.029675 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:08.179307 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:08.363270 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:08.528627 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:08.528684 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:08.690921 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:18:08.789979 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:18:08.862908 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:09.027899 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:09.029190 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:09.180043 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:09.363145 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:09.528549 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:09.528569 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:09.680015 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:09.863267 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:10.027919 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:10.029374 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:10.180537 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:10.363281 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:10.527250 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:10.528325 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:10.683500 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:18:10.790785 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:18:10.863847 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:11.028845 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:11.029069 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:11.180120 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:11.363354 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:11.527672 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:11.528494 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:11.680039 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:11.863198 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:12.027375 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:12.028712 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:12.179882 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:12.363026 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:12.527669 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:12.529249 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:12.689499 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:12.863486 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:13.028466 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:13.028848 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:13.202212 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:13.328692 1135488 node_ready.go:49] node "addons-789752" is "Ready"
	I1027 22:18:13.328724 1135488 node_ready.go:38] duration metric: took 38.541979638s for node "addons-789752" to be "Ready" ...
	I1027 22:18:13.328739 1135488 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:18:13.328823 1135488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:18:13.348158 1135488 api_server.go:72] duration metric: took 41.513838898s to wait for apiserver process to appear ...
	I1027 22:18:13.348184 1135488 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:18:13.348205 1135488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1027 22:18:13.360708 1135488 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
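	This is the turning point of the wait loop: the node reports Ready after about 38.5s, and the apiserver healthz endpoint answers 200 with "ok". The same probe can be issued by hand; a sketch assuming the endpoint from the log (-k is needed because the apiserver presents a cluster-internal certificate):
	
	curl -k https://192.168.49.2:8443/healthz
	# prints: ok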
	I1027 22:18:13.362757 1135488 api_server.go:141] control plane version: v1.34.1
	I1027 22:18:13.362785 1135488 api_server.go:131] duration metric: took 14.593933ms to wait for apiserver health ...
	I1027 22:18:13.362797 1135488 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:18:13.381529 1135488 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 22:18:13.381604 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:13.382548 1135488 system_pods.go:59] 19 kube-system pods found
	I1027 22:18:13.382646 1135488 system_pods.go:61] "coredns-66bc5c9577-5586j" [ce92129d-e557-4e8c-97b9-d778d8447f67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:18:13.382674 1135488 system_pods.go:61] "csi-hostpath-attacher-0" [51b9885c-3a47-45a8-b119-e37bf23eab06] Pending
	I1027 22:18:13.382696 1135488 system_pods.go:61] "csi-hostpath-resizer-0" [8596dd50-937a-4378-af75-36bd1facd079] Pending
	I1027 22:18:13.382730 1135488 system_pods.go:61] "csi-hostpathplugin-lrbhx" [f0e7bc75-d84d-4a92-9233-e7e5e4934f60] Pending
	I1027 22:18:13.382755 1135488 system_pods.go:61] "etcd-addons-789752" [cf8e0540-6bac-49c3-9b0e-ef24d03fe92d] Running
	I1027 22:18:13.382774 1135488 system_pods.go:61] "kindnet-kn5mv" [b5b9e324-a60d-4dbd-b905-bb17c7a32b8a] Running
	I1027 22:18:13.382810 1135488 system_pods.go:61] "kube-apiserver-addons-789752" [a8fab895-7ef6-4cf2-928d-7d563cdb3917] Running
	I1027 22:18:13.382834 1135488 system_pods.go:61] "kube-controller-manager-addons-789752" [32c9db7f-3cf3-4fef-9add-764e75ba98c1] Running
	I1027 22:18:13.382857 1135488 system_pods.go:61] "kube-ingress-dns-minikube" [30c831ba-9e90-4d98-83a4-3636dc00800b] Pending
	I1027 22:18:13.382893 1135488 system_pods.go:61] "kube-proxy-d6r65" [eda11ab0-4509-4ed0-a84e-e4a8146e92a1] Running
	I1027 22:18:13.382918 1135488 system_pods.go:61] "kube-scheduler-addons-789752" [e7aba73a-3d2b-4e96-994b-00677241bace] Running
	I1027 22:18:13.382941 1135488 system_pods.go:61] "metrics-server-85b7d694d7-8kfjg" [c1cd9081-6ece-4513-a137-8d3c8a378a70] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 22:18:13.382980 1135488 system_pods.go:61] "nvidia-device-plugin-daemonset-7xjnb" [d25c58e2-5389-4ef7-bdb1-7f57a029a00b] Pending
	I1027 22:18:13.383005 1135488 system_pods.go:61] "registry-6b586f9694-vw4fc" [827638e6-9844-4d0b-a405-c1752b7deb36] Pending
	I1027 22:18:13.383026 1135488 system_pods.go:61] "registry-creds-764b6fb674-ldrtc" [bd101187-f370-4b46-8017-bd4f7b44959c] Pending
	I1027 22:18:13.383062 1135488 system_pods.go:61] "registry-proxy-pxgxr" [f3af9e0b-d8bc-47fc-b5a9-4e6b9d23fc0c] Pending
	I1027 22:18:13.383086 1135488 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dz2cc" [e8e9917f-86cb-4682-903c-f394c84eb57f] Pending
	I1027 22:18:13.383104 1135488 system_pods.go:61] "snapshot-controller-7d9fbc56b8-vxkd6" [576ae499-cdfc-4bd8-a703-22ef0903f4fb] Pending
	I1027 22:18:13.383138 1135488 system_pods.go:61] "storage-provisioner" [5fe23b74-3690-4678-9086-440db4325b59] Pending
	I1027 22:18:13.383162 1135488 system_pods.go:74] duration metric: took 20.357886ms to wait for pod list to return data ...
	I1027 22:18:13.383184 1135488 default_sa.go:34] waiting for default service account to be created ...
	I1027 22:18:13.394555 1135488 default_sa.go:45] found service account: "default"
	I1027 22:18:13.394635 1135488 default_sa.go:55] duration metric: took 11.431042ms for default service account to be created ...
	I1027 22:18:13.394675 1135488 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 22:18:13.404380 1135488 system_pods.go:86] 19 kube-system pods found
	I1027 22:18:13.404474 1135488 system_pods.go:89] "coredns-66bc5c9577-5586j" [ce92129d-e557-4e8c-97b9-d778d8447f67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:18:13.404494 1135488 system_pods.go:89] "csi-hostpath-attacher-0" [51b9885c-3a47-45a8-b119-e37bf23eab06] Pending
	I1027 22:18:13.404515 1135488 system_pods.go:89] "csi-hostpath-resizer-0" [8596dd50-937a-4378-af75-36bd1facd079] Pending
	I1027 22:18:13.404550 1135488 system_pods.go:89] "csi-hostpathplugin-lrbhx" [f0e7bc75-d84d-4a92-9233-e7e5e4934f60] Pending
	I1027 22:18:13.404569 1135488 system_pods.go:89] "etcd-addons-789752" [cf8e0540-6bac-49c3-9b0e-ef24d03fe92d] Running
	I1027 22:18:13.404589 1135488 system_pods.go:89] "kindnet-kn5mv" [b5b9e324-a60d-4dbd-b905-bb17c7a32b8a] Running
	I1027 22:18:13.404610 1135488 system_pods.go:89] "kube-apiserver-addons-789752" [a8fab895-7ef6-4cf2-928d-7d563cdb3917] Running
	I1027 22:18:13.404644 1135488 system_pods.go:89] "kube-controller-manager-addons-789752" [32c9db7f-3cf3-4fef-9add-764e75ba98c1] Running
	I1027 22:18:13.404663 1135488 system_pods.go:89] "kube-ingress-dns-minikube" [30c831ba-9e90-4d98-83a4-3636dc00800b] Pending
	I1027 22:18:13.404680 1135488 system_pods.go:89] "kube-proxy-d6r65" [eda11ab0-4509-4ed0-a84e-e4a8146e92a1] Running
	I1027 22:18:13.404713 1135488 system_pods.go:89] "kube-scheduler-addons-789752" [e7aba73a-3d2b-4e96-994b-00677241bace] Running
	I1027 22:18:13.404739 1135488 system_pods.go:89] "metrics-server-85b7d694d7-8kfjg" [c1cd9081-6ece-4513-a137-8d3c8a378a70] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 22:18:13.404768 1135488 system_pods.go:89] "nvidia-device-plugin-daemonset-7xjnb" [d25c58e2-5389-4ef7-bdb1-7f57a029a00b] Pending
	I1027 22:18:13.404806 1135488 system_pods.go:89] "registry-6b586f9694-vw4fc" [827638e6-9844-4d0b-a405-c1752b7deb36] Pending
	I1027 22:18:13.404830 1135488 system_pods.go:89] "registry-creds-764b6fb674-ldrtc" [bd101187-f370-4b46-8017-bd4f7b44959c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 22:18:13.404850 1135488 system_pods.go:89] "registry-proxy-pxgxr" [f3af9e0b-d8bc-47fc-b5a9-4e6b9d23fc0c] Pending
	I1027 22:18:13.404884 1135488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dz2cc" [e8e9917f-86cb-4682-903c-f394c84eb57f] Pending
	I1027 22:18:13.404907 1135488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vxkd6" [576ae499-cdfc-4bd8-a703-22ef0903f4fb] Pending
	I1027 22:18:13.404924 1135488 system_pods.go:89] "storage-provisioner" [5fe23b74-3690-4678-9086-440db4325b59] Pending
	I1027 22:18:13.404968 1135488 retry.go:31] will retry after 228.262723ms: missing components: kube-dns
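	The pod sweep now blocks only on kube-dns, i.e. the coredns pod still Pending above. A hedged way to watch it converge, assuming a kubeconfig pointed at this cluster and the standard coredns label:
	
	kubectl -n kube-system get pods -l k8s-app=kube-dns --watch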
	I1027 22:18:13.573100 1135488 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 22:18:13.573175 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:13.574590 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:13.682359 1135488 system_pods.go:86] 19 kube-system pods found
	I1027 22:18:13.682488 1135488 system_pods.go:89] "coredns-66bc5c9577-5586j" [ce92129d-e557-4e8c-97b9-d778d8447f67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:18:13.682511 1135488 system_pods.go:89] "csi-hostpath-attacher-0" [51b9885c-3a47-45a8-b119-e37bf23eab06] Pending
	I1027 22:18:13.682559 1135488 system_pods.go:89] "csi-hostpath-resizer-0" [8596dd50-937a-4378-af75-36bd1facd079] Pending
	I1027 22:18:13.682645 1135488 system_pods.go:89] "csi-hostpathplugin-lrbhx" [f0e7bc75-d84d-4a92-9233-e7e5e4934f60] Pending
	I1027 22:18:13.682674 1135488 system_pods.go:89] "etcd-addons-789752" [cf8e0540-6bac-49c3-9b0e-ef24d03fe92d] Running
	I1027 22:18:13.682714 1135488 system_pods.go:89] "kindnet-kn5mv" [b5b9e324-a60d-4dbd-b905-bb17c7a32b8a] Running
	I1027 22:18:13.682737 1135488 system_pods.go:89] "kube-apiserver-addons-789752" [a8fab895-7ef6-4cf2-928d-7d563cdb3917] Running
	I1027 22:18:13.682756 1135488 system_pods.go:89] "kube-controller-manager-addons-789752" [32c9db7f-3cf3-4fef-9add-764e75ba98c1] Running
	I1027 22:18:13.682799 1135488 system_pods.go:89] "kube-ingress-dns-minikube" [30c831ba-9e90-4d98-83a4-3636dc00800b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 22:18:13.682820 1135488 system_pods.go:89] "kube-proxy-d6r65" [eda11ab0-4509-4ed0-a84e-e4a8146e92a1] Running
	I1027 22:18:13.682840 1135488 system_pods.go:89] "kube-scheduler-addons-789752" [e7aba73a-3d2b-4e96-994b-00677241bace] Running
	I1027 22:18:13.682879 1135488 system_pods.go:89] "metrics-server-85b7d694d7-8kfjg" [c1cd9081-6ece-4513-a137-8d3c8a378a70] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 22:18:13.682908 1135488 system_pods.go:89] "nvidia-device-plugin-daemonset-7xjnb" [d25c58e2-5389-4ef7-bdb1-7f57a029a00b] Pending
	I1027 22:18:13.682933 1135488 system_pods.go:89] "registry-6b586f9694-vw4fc" [827638e6-9844-4d0b-a405-c1752b7deb36] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 22:18:13.682974 1135488 system_pods.go:89] "registry-creds-764b6fb674-ldrtc" [bd101187-f370-4b46-8017-bd4f7b44959c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 22:18:13.682997 1135488 system_pods.go:89] "registry-proxy-pxgxr" [f3af9e0b-d8bc-47fc-b5a9-4e6b9d23fc0c] Pending
	I1027 22:18:13.683029 1135488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dz2cc" [e8e9917f-86cb-4682-903c-f394c84eb57f] Pending
	I1027 22:18:13.683058 1135488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vxkd6" [576ae499-cdfc-4bd8-a703-22ef0903f4fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 22:18:13.683083 1135488 system_pods.go:89] "storage-provisioner" [5fe23b74-3690-4678-9086-440db4325b59] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:18:13.683129 1135488 retry.go:31] will retry after 357.428943ms: missing components: kube-dns
	I1027 22:18:13.720423 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:13.870334 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:14.058646 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:14.069720 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:14.073072 1135488 system_pods.go:86] 19 kube-system pods found
	I1027 22:18:14.073195 1135488 system_pods.go:89] "coredns-66bc5c9577-5586j" [ce92129d-e557-4e8c-97b9-d778d8447f67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:18:14.073256 1135488 system_pods.go:89] "csi-hostpath-attacher-0" [51b9885c-3a47-45a8-b119-e37bf23eab06] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 22:18:14.073293 1135488 system_pods.go:89] "csi-hostpath-resizer-0" [8596dd50-937a-4378-af75-36bd1facd079] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 22:18:14.073315 1135488 system_pods.go:89] "csi-hostpathplugin-lrbhx" [f0e7bc75-d84d-4a92-9233-e7e5e4934f60] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 22:18:14.073354 1135488 system_pods.go:89] "etcd-addons-789752" [cf8e0540-6bac-49c3-9b0e-ef24d03fe92d] Running
	I1027 22:18:14.073401 1135488 system_pods.go:89] "kindnet-kn5mv" [b5b9e324-a60d-4dbd-b905-bb17c7a32b8a] Running
	I1027 22:18:14.073464 1135488 system_pods.go:89] "kube-apiserver-addons-789752" [a8fab895-7ef6-4cf2-928d-7d563cdb3917] Running
	I1027 22:18:14.073484 1135488 system_pods.go:89] "kube-controller-manager-addons-789752" [32c9db7f-3cf3-4fef-9add-764e75ba98c1] Running
	I1027 22:18:14.073525 1135488 system_pods.go:89] "kube-ingress-dns-minikube" [30c831ba-9e90-4d98-83a4-3636dc00800b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 22:18:14.073547 1135488 system_pods.go:89] "kube-proxy-d6r65" [eda11ab0-4509-4ed0-a84e-e4a8146e92a1] Running
	I1027 22:18:14.073574 1135488 system_pods.go:89] "kube-scheduler-addons-789752" [e7aba73a-3d2b-4e96-994b-00677241bace] Running
	I1027 22:18:14.073610 1135488 system_pods.go:89] "metrics-server-85b7d694d7-8kfjg" [c1cd9081-6ece-4513-a137-8d3c8a378a70] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 22:18:14.073656 1135488 system_pods.go:89] "nvidia-device-plugin-daemonset-7xjnb" [d25c58e2-5389-4ef7-bdb1-7f57a029a00b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 22:18:14.073707 1135488 system_pods.go:89] "registry-6b586f9694-vw4fc" [827638e6-9844-4d0b-a405-c1752b7deb36] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 22:18:14.073747 1135488 system_pods.go:89] "registry-creds-764b6fb674-ldrtc" [bd101187-f370-4b46-8017-bd4f7b44959c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 22:18:14.073791 1135488 system_pods.go:89] "registry-proxy-pxgxr" [f3af9e0b-d8bc-47fc-b5a9-4e6b9d23fc0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 22:18:14.073827 1135488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dz2cc" [e8e9917f-86cb-4682-903c-f394c84eb57f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 22:18:14.073874 1135488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vxkd6" [576ae499-cdfc-4bd8-a703-22ef0903f4fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 22:18:14.073910 1135488 system_pods.go:89] "storage-provisioner" [5fe23b74-3690-4678-9086-440db4325b59] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:18:14.073963 1135488 retry.go:31] will retry after 331.542918ms: missing components: kube-dns
	I1027 22:18:14.185307 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:14.364262 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:14.422681 1135488 system_pods.go:86] 19 kube-system pods found
	I1027 22:18:14.422731 1135488 system_pods.go:89] "coredns-66bc5c9577-5586j" [ce92129d-e557-4e8c-97b9-d778d8447f67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:18:14.422743 1135488 system_pods.go:89] "csi-hostpath-attacher-0" [51b9885c-3a47-45a8-b119-e37bf23eab06] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 22:18:14.422752 1135488 system_pods.go:89] "csi-hostpath-resizer-0" [8596dd50-937a-4378-af75-36bd1facd079] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 22:18:14.422759 1135488 system_pods.go:89] "csi-hostpathplugin-lrbhx" [f0e7bc75-d84d-4a92-9233-e7e5e4934f60] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 22:18:14.422768 1135488 system_pods.go:89] "etcd-addons-789752" [cf8e0540-6bac-49c3-9b0e-ef24d03fe92d] Running
	I1027 22:18:14.422774 1135488 system_pods.go:89] "kindnet-kn5mv" [b5b9e324-a60d-4dbd-b905-bb17c7a32b8a] Running
	I1027 22:18:14.422784 1135488 system_pods.go:89] "kube-apiserver-addons-789752" [a8fab895-7ef6-4cf2-928d-7d563cdb3917] Running
	I1027 22:18:14.422789 1135488 system_pods.go:89] "kube-controller-manager-addons-789752" [32c9db7f-3cf3-4fef-9add-764e75ba98c1] Running
	I1027 22:18:14.422805 1135488 system_pods.go:89] "kube-ingress-dns-minikube" [30c831ba-9e90-4d98-83a4-3636dc00800b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 22:18:14.422814 1135488 system_pods.go:89] "kube-proxy-d6r65" [eda11ab0-4509-4ed0-a84e-e4a8146e92a1] Running
	I1027 22:18:14.422824 1135488 system_pods.go:89] "kube-scheduler-addons-789752" [e7aba73a-3d2b-4e96-994b-00677241bace] Running
	I1027 22:18:14.422831 1135488 system_pods.go:89] "metrics-server-85b7d694d7-8kfjg" [c1cd9081-6ece-4513-a137-8d3c8a378a70] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 22:18:14.422851 1135488 system_pods.go:89] "nvidia-device-plugin-daemonset-7xjnb" [d25c58e2-5389-4ef7-bdb1-7f57a029a00b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 22:18:14.422857 1135488 system_pods.go:89] "registry-6b586f9694-vw4fc" [827638e6-9844-4d0b-a405-c1752b7deb36] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 22:18:14.422864 1135488 system_pods.go:89] "registry-creds-764b6fb674-ldrtc" [bd101187-f370-4b46-8017-bd4f7b44959c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 22:18:14.422877 1135488 system_pods.go:89] "registry-proxy-pxgxr" [f3af9e0b-d8bc-47fc-b5a9-4e6b9d23fc0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 22:18:14.422897 1135488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dz2cc" [e8e9917f-86cb-4682-903c-f394c84eb57f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 22:18:14.422914 1135488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vxkd6" [576ae499-cdfc-4bd8-a703-22ef0903f4fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 22:18:14.422929 1135488 system_pods.go:89] "storage-provisioner" [5fe23b74-3690-4678-9086-440db4325b59] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:18:14.422939 1135488 system_pods.go:126] duration metric: took 1.028227355s to wait for k8s-apps to be running ...
	I1027 22:18:14.422959 1135488 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 22:18:14.423018 1135488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:18:14.445065 1135488 system_svc.go:56] duration metric: took 22.096338ms WaitForService to wait for kubelet
	I1027 22:18:14.445096 1135488 kubeadm.go:587] duration metric: took 42.610782527s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:18:14.445116 1135488 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:18:14.506042 1135488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 22:18:14.506080 1135488 node_conditions.go:123] node cpu capacity is 2
	I1027 22:18:14.506095 1135488 node_conditions.go:105] duration metric: took 60.973225ms to run NodePressure ...
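	The NodePressure check simply reads node capacity (2 CPUs and 203034800Ki of ephemeral storage here). An equivalent hedged one-liner, assuming a kubeconfig for the addons-789752 profile:
	
	kubectl get node addons-789752 -o jsonpath='{.status.capacity}'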
	I1027 22:18:14.506108 1135488 start.go:242] waiting for startup goroutines ...
	I1027 22:18:14.542291 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:14.543618 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:14.679914 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:14.873319 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:15.033178 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:15.033952 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:15.180768 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:15.363562 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:15.530457 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:15.530954 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:15.696674 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:15.864878 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:16.028856 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:16.029060 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:16.180537 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:16.364962 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:16.529286 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:16.529492 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:16.685502 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:16.864380 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:17.029848 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:17.030196 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:17.180682 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:17.363799 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:17.529853 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:17.529990 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:17.683992 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:17.863740 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:18.030898 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:18.031317 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:18.180113 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:18.364576 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:18.528963 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:18.529728 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:18.685621 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:18.864088 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:19.029357 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:19.029514 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:19.179638 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:19.363817 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:19.530181 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:19.530954 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:19.686795 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:19.863094 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:20.029327 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:20.030278 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:20.182147 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:20.364724 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:20.531313 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:20.532382 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:20.694656 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:20.865639 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:21.036377 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:21.036797 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:21.181132 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:21.364690 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:21.530633 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:21.531008 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:21.689301 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:21.826589 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:18:21.865134 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:22.029543 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:22.030681 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:22.180053 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:22.364584 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:22.529201 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:22.530337 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:22.685896 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:22.828905 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.002276164s)
	W1027 22:18:22.828943 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:18:22.828968 1135488 retry.go:31] will retry after 19.616702267s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
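The stderr above means kubectl's client-side validation found a document in /etc/kubernetes/addons/ig-crd.yaml missing its top-level apiVersion and kind fields, so the apply exits non-zero even though every other object in the batch was accepted unchanged. A minimal Go sketch of that per-document check, assuming gopkg.in/yaml.v3 and a local copy of the manifest (the path and struct names are illustrative, not kubectl's code):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency; any YAML decoder works
)

// typeMeta holds only the two keys kubectl's validator insists on.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("ig-crd.yaml") // hypothetical local copy of the addon manifest
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 0; ; i++ {
		var tm typeMeta
		err := dec.Decode(&tm)
		if err == io.EOF {
			break // no more documents in the stream
		}
		if err != nil {
			panic(err)
		}
		// kubectl reports exactly this condition as
		// "[apiVersion not set, kind not set]".
		if tm.APIVersion == "" || tm.Kind == "" {
			fmt.Printf("document %d: [apiVersion not set, kind not set]\n", i)
		}
	}
}

A document without those two keys is unidentifiable to the API machinery, which is why the message offers --validate=false only as an escape hatch rather than a fix.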
	I1027 22:18:22.862973 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:23.028799 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:23.028920 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:23.179738 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:23.364164 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:23.528869 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:23.529280 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:23.682301 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:23.863717 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:24.030206 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:24.030309 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:24.181021 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:24.364430 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:24.531207 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:24.531724 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:24.687330 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:24.863453 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:25.030564 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:25.031050 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:25.180717 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:25.363426 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:25.529758 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:25.530150 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:25.687165 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:25.864049 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:26.029469 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:26.029668 1135488 kapi.go:107] duration metric: took 48.005386311s to wait for kubernetes.io/minikube-addons=registry ...
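The kapi.go:96/kapi.go:107 pairs above are a simple poll: query the pods matching a label selector, log the observed phase, sleep, and report the total wait once the pod is Running. A rough stdlib-only sketch under assumed names (this is not minikube's actual kapi.go, and the 500ms interval is a guess from the timestamps):

package main

import (
	"fmt"
	"time"
)

// podState stands in for a real API lookup of pods matching the selector;
// in this log it keeps answering "Pending" until the addon pod starts.
func podState(selector string) string { return "Pending" }

// waitForPod loops until the pod is Running or the timeout elapses,
// printing one line per poll like the kapi.go:96 entries above.
func waitForPod(selector string, timeout time.Duration) error {
	start := time.Now()
	for {
		if state := podState(selector); state == "Running" {
			fmt.Printf("duration metric: took %s to wait for %s ...\n",
				time.Since(start), selector)
			return nil
		} else {
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, state)
		}
		if time.Since(start) > timeout {
			return fmt.Errorf("timed out waiting for %s", selector)
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
}

func main() {
	_ = waitForPod("kubernetes.io/minikube-addons=registry", 2*time.Second)
}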
	I1027 22:18:26.180113 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:26.364536 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:26.529383 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:26.679845 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:26.863640 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:27.029093 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:27.180065 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:27.364011 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:27.530076 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:27.679864 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:27.865494 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:28.028960 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:28.179797 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:28.370067 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:28.529779 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:28.680956 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:28.863483 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:29.028712 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:29.180557 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:29.364651 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:29.529208 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:29.684093 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:29.863965 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:30.075718 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:30.180439 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:30.364897 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:30.529740 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:30.686973 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:30.863455 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:31.028812 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:31.179544 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:31.363874 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:31.529122 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:31.685777 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:31.863510 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:32.028579 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:32.179650 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:32.364049 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:32.529902 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:32.683328 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:32.864410 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:33.029007 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:33.180494 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:33.364097 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:33.529697 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:33.687372 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:33.864379 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:34.028866 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:34.182898 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:34.363653 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:34.529330 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:34.686497 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:34.863784 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:35.030044 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:35.180829 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:35.364245 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:35.529518 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:35.684916 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:35.864572 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:36.030017 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:36.180460 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:36.364822 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:36.530020 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:36.689993 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:36.863757 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:37.030357 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:37.180041 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:37.363790 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:37.529663 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:37.687786 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:37.864164 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:38.030278 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:38.181281 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:38.363711 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:38.530063 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:38.686853 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:38.864470 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:39.029394 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:39.180796 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:39.363566 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:39.528363 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:39.689486 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:39.863417 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:40.045133 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:40.190850 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:40.363275 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:40.529572 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:40.687952 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:40.863119 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:41.030088 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:41.184605 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:41.364428 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:41.529995 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:41.687131 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:41.863688 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:42.033241 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:42.186371 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:42.365098 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:42.446421 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:18:42.531938 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:42.685980 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:42.863840 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:43.029631 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:43.180190 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:43.363712 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:43.529376 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:43.565802 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.119342126s)
	W1027 22:18:43.565836 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:18:43.565856 1135488 retry.go:31] will retry after 13.97949312s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
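retry.go:31 above is apply-with-retry: each failed kubectl apply is re-run after a randomized delay (19.6s, then 14.0s here) until the addon either applies cleanly or the retry budget runs out. A minimal sketch of that pattern, with the delay bounds and deadline assumed rather than taken from minikube:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// apply stands in for the failing "kubectl apply --force -f ..." call above.
func apply() error { return errors.New("Process exited with status 1") }

func main() {
	deadline := time.Now().Add(1 * time.Minute) // assumed retry budget
	for attempt := 1; ; attempt++ {
		err := apply()
		if err == nil {
			fmt.Println("apply succeeded")
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("giving up after %d attempts: %v\n", attempt, err)
			return
		}
		// randomized delay, in the spirit of the 19.6s and 14.0s waits above
		delay := time.Duration(10+rand.Intn(10)) * time.Second
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
}

Because the underlying manifest is invalid rather than the cluster being slow, every retry here fails the same way until the budget is exhausted.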
	I1027 22:18:43.684789 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:43.862872 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:44.030180 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:44.180522 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:44.363724 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:44.529420 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:44.679491 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:44.863660 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:45.029742 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:45.209618 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:45.366535 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:45.529267 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:45.684955 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:45.863661 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:46.029262 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:46.181620 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:46.363957 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:46.529770 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:46.684764 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:46.863895 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:47.029100 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:47.180295 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:47.364212 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:47.528335 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:47.679550 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:47.863521 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:48.029511 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:48.179467 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:48.364319 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:48.528573 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:48.692091 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:48.866367 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:49.029109 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:49.180467 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:49.364125 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:49.528444 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:49.685471 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:49.864426 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:50.031348 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:50.181119 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:50.365256 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:50.538219 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:50.706174 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:50.864864 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:51.029964 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:51.180615 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:51.364746 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:51.529092 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:51.685224 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:51.864053 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:52.030215 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:52.183693 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:52.363691 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:52.529192 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:52.684288 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:52.863255 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:53.029296 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:53.179834 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:53.364030 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:53.529283 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:53.684957 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:53.863023 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:54.029619 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:54.179947 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:54.363482 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:54.528795 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:54.685277 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:54.863258 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:55.028844 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:55.180431 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:55.364017 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:55.530258 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:55.688022 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:55.863887 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:56.030760 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:56.180603 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:56.368745 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:56.529907 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:56.686362 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:56.863889 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:57.028781 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:57.179435 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:57.364215 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:57.528827 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:57.546156 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:18:57.682931 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:57.863420 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:58.028579 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:58.179567 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:58.363773 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:58.529240 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:58.692750 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:58.697351 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.15111446s)
	W1027 22:18:58.697404 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1027 22:18:58.697502 1135488 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1027 22:18:58.864151 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:59.029627 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:59.179549 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:59.364617 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:59.529292 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:59.679624 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:59.866147 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:00.030353 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:19:00.182558 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:00.378682 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:00.534347 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:19:00.701167 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:00.868484 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:01.028841 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:19:01.180270 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:01.364663 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:01.529208 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:19:01.680639 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:01.863510 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:02.029324 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:19:02.179939 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:02.364026 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:02.529443 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:19:02.680160 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:02.864246 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:03.028869 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:19:03.179723 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:03.364295 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:03.529389 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:19:03.681209 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:03.863928 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:04.029475 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:19:04.179798 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:04.363401 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:04.531587 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:19:04.687364 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:04.864133 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:05.032271 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:19:05.180576 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:05.364090 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:05.529778 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:19:05.686604 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:05.865330 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:06.028791 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:19:06.188293 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:06.364710 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:06.537770 1135488 kapi.go:107] duration metric: took 1m28.512483917s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1027 22:19:06.688815 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:06.864079 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:07.180393 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:07.363884 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:07.682821 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:07.863632 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:08.180765 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:08.363125 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:08.687862 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:08.863556 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:09.180695 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:09.366709 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:09.682117 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:09.864250 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:10.180338 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:10.365083 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:10.680467 1135488 kapi.go:107] duration metric: took 1m29.00403785s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1027 22:19:10.688343 1135488 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-789752 cluster.
	I1027 22:19:10.693341 1135488 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1027 22:19:10.697401 1135488 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
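The gcp-auth-skip-secret opt-out mentioned above is an ordinary pod label. A hedged illustration using the upstream Kubernetes API types (pod name and image are placeholders; only the label key comes from the message):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-creds-pod", // hypothetical name
			Labels: map[string]string{
				// per the minikube message: pods carrying this label
				// do not get GCP credentials mounted by the webhook
				"gcp-auth-skip-secret": "true",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "registry.k8s.io/pause:3.9"}, // placeholder image
			},
		},
	}
	fmt.Println(pod.Name, pod.Labels)
}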
	I1027 22:19:10.864823 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:11.364022 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:11.864330 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:12.363458 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:12.863574 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:13.363674 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:13.887007 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:14.364137 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:14.863895 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:15.364049 1135488 kapi.go:107] duration metric: took 1m37.004224387s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1027 22:19:15.367313 1135488 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, ingress-dns, storage-provisioner, nvidia-device-plugin, registry-creds, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1027 22:19:15.370455 1135488 addons.go:514] duration metric: took 1m43.535683543s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin ingress-dns storage-provisioner nvidia-device-plugin registry-creds metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1027 22:19:15.370508 1135488 start.go:247] waiting for cluster config update ...
	I1027 22:19:15.370530 1135488 start.go:256] writing updated cluster config ...
	I1027 22:19:15.370841 1135488 ssh_runner.go:195] Run: rm -f paused
	I1027 22:19:15.375690 1135488 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:19:15.379383 1135488 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5586j" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:15.385219 1135488 pod_ready.go:94] pod "coredns-66bc5c9577-5586j" is "Ready"
	I1027 22:19:15.385250 1135488 pod_ready.go:86] duration metric: took 5.83917ms for pod "coredns-66bc5c9577-5586j" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:15.464770 1135488 pod_ready.go:83] waiting for pod "etcd-addons-789752" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:15.469695 1135488 pod_ready.go:94] pod "etcd-addons-789752" is "Ready"
	I1027 22:19:15.469723 1135488 pod_ready.go:86] duration metric: took 4.926985ms for pod "etcd-addons-789752" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:15.472297 1135488 pod_ready.go:83] waiting for pod "kube-apiserver-addons-789752" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:15.477163 1135488 pod_ready.go:94] pod "kube-apiserver-addons-789752" is "Ready"
	I1027 22:19:15.477191 1135488 pod_ready.go:86] duration metric: took 4.865955ms for pod "kube-apiserver-addons-789752" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:15.479717 1135488 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-789752" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:15.780158 1135488 pod_ready.go:94] pod "kube-controller-manager-addons-789752" is "Ready"
	I1027 22:19:15.780187 1135488 pod_ready.go:86] duration metric: took 300.440656ms for pod "kube-controller-manager-addons-789752" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:15.979670 1135488 pod_ready.go:83] waiting for pod "kube-proxy-d6r65" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:16.379865 1135488 pod_ready.go:94] pod "kube-proxy-d6r65" is "Ready"
	I1027 22:19:16.379892 1135488 pod_ready.go:86] duration metric: took 400.191995ms for pod "kube-proxy-d6r65" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:16.583578 1135488 pod_ready.go:83] waiting for pod "kube-scheduler-addons-789752" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:16.981773 1135488 pod_ready.go:94] pod "kube-scheduler-addons-789752" is "Ready"
	I1027 22:19:16.981803 1135488 pod_ready.go:86] duration metric: took 398.19465ms for pod "kube-scheduler-addons-789752" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:16.981816 1135488 pod_ready.go:40] duration metric: took 1.606088683s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
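Each pod_ready.go check above passes once the pod reports the Ready condition with status True. A small stand-in sketch of that condition lookup (the struct mirrors the shape of corev1.PodCondition but is local to the example):

package main

import "fmt"

// podCondition mirrors the shape of a Kubernetes pod status condition entry;
// the field names follow the API but the type here is a local stand-in.
type podCondition struct {
	Type   string
	Status string
}

// isReady reports whether the conditions include Ready=True, which is the
// test behind each pod_ready.go:94 line above.
func isReady(conds []podCondition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	fmt.Println(isReady([]podCondition{{Type: "Ready", Status: "True"}})) // true
}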
	I1027 22:19:17.046304 1135488 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 22:19:17.050035 1135488 out.go:179] * Done! kubectl is now configured to use "addons-789752" cluster and "default" namespace by default
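The closing note compares client and server minor versions; a skew of one minor version is within kubectl's supported +/-1 window against the API server, so it is informational rather than an error. The arithmetic behind the "(minor skew: 1)" annotation, sketched:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a "major.minor.patch" version string.
func minor(v string) int {
	m, _ := strconv.Atoi(strings.Split(v, ".")[1])
	return m
}

func main() {
	client, server := "1.33.2", "1.34.1" // versions from the line above
	skew := minor(server) - minor(client)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, server, skew)
}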
	
	
	==> CRI-O <==
	Oct 27 22:22:25 addons-789752 crio[831]: time="2025-10-27T22:22:25.479870161Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-wn4nv Namespace:default ID:b08b08cf49d48069b7ecda5210247c1e016828333db0122daf2dd9ef5b7570e0 UID:bda17b21-2518-450d-9588-06a8cc90b44e NetNS:/var/run/netns/b69e9037-f5da-4036-a5f3-bde90bfb50ae Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000b38d20}] Aliases:map[]}"
	Oct 27 22:22:25 addons-789752 crio[831]: time="2025-10-27T22:22:25.479928468Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-wn4nv to CNI network \"kindnet\" (type=ptp)"
	Oct 27 22:22:25 addons-789752 crio[831]: time="2025-10-27T22:22:25.49335144Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-wn4nv Namespace:default ID:b08b08cf49d48069b7ecda5210247c1e016828333db0122daf2dd9ef5b7570e0 UID:bda17b21-2518-450d-9588-06a8cc90b44e NetNS:/var/run/netns/b69e9037-f5da-4036-a5f3-bde90bfb50ae Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000b38d20}] Aliases:map[]}"
	Oct 27 22:22:25 addons-789752 crio[831]: time="2025-10-27T22:22:25.493507405Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-wn4nv for CNI network kindnet (type=ptp)"
	Oct 27 22:22:25 addons-789752 crio[831]: time="2025-10-27T22:22:25.496849425Z" level=info msg="Ran pod sandbox b08b08cf49d48069b7ecda5210247c1e016828333db0122daf2dd9ef5b7570e0 with infra container: default/hello-world-app-5d498dc89-wn4nv/POD" id=9b6521fc-9c1f-42a6-a04c-f11f5afc1f2a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:22:25 addons-789752 crio[831]: time="2025-10-27T22:22:25.497974561Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e89a42f8-8e1f-452d-81e5-1765b766a984 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:22:25 addons-789752 crio[831]: time="2025-10-27T22:22:25.498103317Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=e89a42f8-8e1f-452d-81e5-1765b766a984 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:22:25 addons-789752 crio[831]: time="2025-10-27T22:22:25.498141496Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=e89a42f8-8e1f-452d-81e5-1765b766a984 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:22:25 addons-789752 crio[831]: time="2025-10-27T22:22:25.499227944Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=7a8e1401-9f9f-4411-812d-d015cc81dc4c name=/runtime.v1.ImageService/PullImage
	Oct 27 22:22:25 addons-789752 crio[831]: time="2025-10-27T22:22:25.501255878Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 27 22:22:26 addons-789752 crio[831]: time="2025-10-27T22:22:26.01449626Z" level=info msg="Removing container: 1e05cd2f8e14b25137e894ba36af280059ae01c7eb1d9f5f9d677573f641190f" id=af4f4f93-a114-4a70-88cc-e3d6796efc6c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:22:26 addons-789752 crio[831]: time="2025-10-27T22:22:26.075797805Z" level=info msg="Error loading conmon cgroup of container 1e05cd2f8e14b25137e894ba36af280059ae01c7eb1d9f5f9d677573f641190f: cgroup deleted" id=af4f4f93-a114-4a70-88cc-e3d6796efc6c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:22:26 addons-789752 crio[831]: time="2025-10-27T22:22:26.087569226Z" level=info msg="Removed container 1e05cd2f8e14b25137e894ba36af280059ae01c7eb1d9f5f9d677573f641190f: kube-system/registry-creds-764b6fb674-ldrtc/registry-creds" id=af4f4f93-a114-4a70-88cc-e3d6796efc6c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 22:22:26 addons-789752 crio[831]: time="2025-10-27T22:22:26.152973752Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=7a8e1401-9f9f-4411-812d-d015cc81dc4c name=/runtime.v1.ImageService/PullImage
	Oct 27 22:22:26 addons-789752 crio[831]: time="2025-10-27T22:22:26.153777997Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=2d5e86e1-afbf-4384-b8c0-1f25a88f8a50 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:22:26 addons-789752 crio[831]: time="2025-10-27T22:22:26.159654469Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=233d64ca-1812-43ff-8755-38831ee62fb1 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:22:26 addons-789752 crio[831]: time="2025-10-27T22:22:26.170339655Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-wn4nv/hello-world-app" id=f0ecbca8-ba97-464a-801d-e4c9da596257 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:22:26 addons-789752 crio[831]: time="2025-10-27T22:22:26.170704518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:22:26 addons-789752 crio[831]: time="2025-10-27T22:22:26.178073088Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:22:26 addons-789752 crio[831]: time="2025-10-27T22:22:26.179620252Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e5832eaf5eafc1ebcff092a111889e0aa043a2d3021a8241ebb2a3e909575262/merged/etc/passwd: no such file or directory"
	Oct 27 22:22:26 addons-789752 crio[831]: time="2025-10-27T22:22:26.179774354Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e5832eaf5eafc1ebcff092a111889e0aa043a2d3021a8241ebb2a3e909575262/merged/etc/group: no such file or directory"
	Oct 27 22:22:26 addons-789752 crio[831]: time="2025-10-27T22:22:26.18011182Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:22:26 addons-789752 crio[831]: time="2025-10-27T22:22:26.200725734Z" level=info msg="Created container 1695c5c3ebfbbc0d9ee419bbb3194c42bb437e35d52efcd5176a6d91d5f5f86f: default/hello-world-app-5d498dc89-wn4nv/hello-world-app" id=f0ecbca8-ba97-464a-801d-e4c9da596257 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:22:26 addons-789752 crio[831]: time="2025-10-27T22:22:26.202605547Z" level=info msg="Starting container: 1695c5c3ebfbbc0d9ee419bbb3194c42bb437e35d52efcd5176a6d91d5f5f86f" id=bc5ae8e4-401c-4a09-b1f2-5b3633d27f26 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:22:26 addons-789752 crio[831]: time="2025-10-27T22:22:26.210274716Z" level=info msg="Started container" PID=7248 containerID=1695c5c3ebfbbc0d9ee419bbb3194c42bb437e35d52efcd5176a6d91d5f5f86f description=default/hello-world-app-5d498dc89-wn4nv/hello-world-app id=bc5ae8e4-401c-4a09-b1f2-5b3633d27f26 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b08b08cf49d48069b7ecda5210247c1e016828333db0122daf2dd9ef5b7570e0
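
	The crio entries above are the runtime's own journal; a minimal sketch of how to pull the same stream, reusing the profile name and binary path from this run:

	  # raw crio log on the node (same lines as above)
	  out/minikube-linux-arm64 -p addons-789752 ssh -- sudo journalctl -u crio --no-pager
	  # regenerate this whole report section into a file
	  out/minikube-linux-arm64 -p addons-789752 logs --file=cluster.log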
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	1695c5c3ebfbb       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   b08b08cf49d48       hello-world-app-5d498dc89-wn4nv             default
	8d0d1307c27f8       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             2 seconds ago            Exited              registry-creds                           1                   efb2d21ecc1c8       registry-creds-764b6fb674-ldrtc             kube-system
	c5f84f51b081d       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago            Running             nginx                                    0                   f8c625b845f51       nginx                                       default
	0c81d6f75b203       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   d09d58c507f1a       busybox                                     default
	75710d7cc5263       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   4875b9d71c445       csi-hostpathplugin-lrbhx                    kube-system
	ba4375e556d33       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   4875b9d71c445       csi-hostpathplugin-lrbhx                    kube-system
	6360be647f550       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   4875b9d71c445       csi-hostpathplugin-lrbhx                    kube-system
	718db41ae0e01       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   4875b9d71c445       csi-hostpathplugin-lrbhx                    kube-system
	195417cf0328a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   4815d0e4143b2       gcp-auth-78565c9fb4-f79xb                   gcp-auth
	8893e0b4f4c31       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   77c82edbeee86       ingress-nginx-controller-675c5ddd98-spjc8   ingress-nginx
	5d5039ffe6c51       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   cd9437be6d469       gadget-zrlpj                                gadget
	fa9874677b5b6       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   4875b9d71c445       csi-hostpathplugin-lrbhx                    kube-system
	80a5e9b22352d       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   135171b8ec080       local-path-provisioner-648f6765c9-zlzmv     local-path-storage
	8503dbcc9a80b       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   efada02867106       yakd-dashboard-5ff678cb9-qpqkf              yakd-dashboard
	e49247d0ffa77       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   c29787ec0980d       kube-ingress-dns-minikube                   kube-system
	4568c459c0fc3       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             3 minutes ago            Exited              patch                                    1                   5e7f3ed726e9e       ingress-nginx-admission-patch-4f5h7         ingress-nginx
	bc0685b478e5c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              create                                   0                   96b6b40a1aea5       ingress-nginx-admission-create-gcl8s        ingress-nginx
	2a94fd6377a97       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   5ac4f79ac0376       nvidia-device-plugin-daemonset-7xjnb        kube-system
	1891841b92bc2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   4875b9d71c445       csi-hostpathplugin-lrbhx                    kube-system
	364352eda0536       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   e56fe2f16f3df       csi-hostpath-resizer-0                      kube-system
	2b141a747edd8       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   3e5c304f5bf37       snapshot-controller-7d9fbc56b8-vxkd6        kube-system
	2e03207b4b26e       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   90e4f26d65f1b       csi-hostpath-attacher-0                     kube-system
	c89583e34b204       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   f93b1fc390c5d       snapshot-controller-7d9fbc56b8-dz2cc        kube-system
	9265cc16ebe00       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              4 minutes ago            Running             registry-proxy                           0                   c8c6034187aca       registry-proxy-pxgxr                        kube-system
	9872fee8e1cf9       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   1939f6fa1378e       registry-6b586f9694-vw4fc                   kube-system
	3c9c0fd6e6096       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   49a610d7de707       metrics-server-85b7d694d7-8kfjg             kube-system
	5486ddf3fbb47       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago            Running             cloud-spanner-emulator                   0                   40db2c368ef9a       cloud-spanner-emulator-86bd5cbb97-t6sc8     default
	f712dddd4573d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   30fb4e3647d6c       storage-provisioner                         kube-system
	a7d75dad24853       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   e044bc239f060       coredns-66bc5c9577-5586j                    kube-system
	bcef984a34b58       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   41514d935473d       kube-proxy-d6r65                            kube-system
	a6c04b76522e4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   919d6abb14fd9       kindnet-kn5mv                               kube-system
	f412d82dffe40       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   38e22bf0aeed7       kube-scheduler-addons-789752                kube-system
	ed5258f512747       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   5d6e656f9e772       kube-apiserver-addons-789752                kube-system
	b57e96f12e54c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   33d510217a750       kube-controller-manager-addons-789752       kube-system
	732fddf2b32de       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   8877489734e9d       etcd-addons-789752                          kube-system
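
	The table above is CRI state as crictl prints it; a sketch, assuming crictl on the node is pointed at the crio socket:

	  # all containers, including Exited ones (the STATE column above)
	  sudo crictl ps -a
	  # the sandbox IDs from the POD ID column
	  sudo crictl pods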
	
	
	==> coredns [a7d75dad24853dbae39098cf151dae187d4239afff3b61a9449981f10b79fd2a] <==
	[INFO] 10.244.0.6:37318 - 49054 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002097087s
	[INFO] 10.244.0.6:37318 - 29486 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000124654s
	[INFO] 10.244.0.6:37318 - 31037 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000087682s
	[INFO] 10.244.0.6:42690 - 52163 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000155375s
	[INFO] 10.244.0.6:42690 - 51950 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000089651s
	[INFO] 10.244.0.6:58048 - 49852 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008577s
	[INFO] 10.244.0.6:58048 - 49638 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000089019s
	[INFO] 10.244.0.6:60493 - 3099 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000144823s
	[INFO] 10.244.0.6:60493 - 2926 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00016385s
	[INFO] 10.244.0.6:33171 - 42126 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001403031s
	[INFO] 10.244.0.6:33171 - 41921 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001476591s
	[INFO] 10.244.0.6:56232 - 57673 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000112338s
	[INFO] 10.244.0.6:56232 - 57517 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000151411s
	[INFO] 10.244.0.20:39574 - 64264 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000197237s
	[INFO] 10.244.0.20:44757 - 39757 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000089183s
	[INFO] 10.244.0.20:57002 - 52227 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000140491s
	[INFO] 10.244.0.20:57604 - 47439 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124572s
	[INFO] 10.244.0.20:43303 - 3576 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000168889s
	[INFO] 10.244.0.20:38103 - 55645 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000266819s
	[INFO] 10.244.0.20:58291 - 31516 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002022846s
	[INFO] 10.244.0.20:55413 - 59573 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001893688s
	[INFO] 10.244.0.20:50236 - 43393 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004214212s
	[INFO] 10.244.0.20:39562 - 49483 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.004900146s
	[INFO] 10.244.0.23:40531 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000208979s
	[INFO] 10.244.0.23:49011 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000301379s
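
	The NXDOMAIN/NOERROR pairs above are ordinary ndots:5 search-path expansion: each suffix from the pod's resolv.conf is tried before the bare name resolves. A sketch using the busybox pod listed in the container table:

	  # trailing dot = fully qualified name, skips search-path expansion
	  kubectl exec busybox -- nslookup registry.kube-system.svc.cluster.local.
	  # the search suffixes that produced the NXDOMAIN attempts
	  kubectl exec busybox -- cat /etc/resolv.conf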
	
	
	==> describe nodes <==
	Name:               addons-789752
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-789752
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=addons-789752
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_17_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-789752
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-789752"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:17:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-789752
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:22:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:22:13 +0000   Mon, 27 Oct 2025 22:17:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:22:13 +0000   Mon, 27 Oct 2025 22:17:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:22:13 +0000   Mon, 27 Oct 2025 22:17:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:22:13 +0000   Mon, 27 Oct 2025 22:18:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-789752
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                91d5fd2b-6f16-45b2-ae26-1abf741d55ae
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  default                     cloud-spanner-emulator-86bd5cbb97-t6sc8      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  default                     hello-world-app-5d498dc89-wn4nv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-zrlpj                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  gcp-auth                    gcp-auth-78565c9fb4-f79xb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-spjc8    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m50s
	  kube-system                 coredns-66bc5c9577-5586j                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m56s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 csi-hostpathplugin-lrbhx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 etcd-addons-789752                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m1s
	  kube-system                 kindnet-kn5mv                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m56s
	  kube-system                 kube-apiserver-addons-789752                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-controller-manager-addons-789752        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-d6r65                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-addons-789752                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 metrics-server-85b7d694d7-8kfjg              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m51s
	  kube-system                 nvidia-device-plugin-daemonset-7xjnb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 registry-6b586f9694-vw4fc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 registry-creds-764b6fb674-ldrtc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 registry-proxy-pxgxr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 snapshot-controller-7d9fbc56b8-dz2cc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 snapshot-controller-7d9fbc56b8-vxkd6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  local-path-storage          local-path-provisioner-648f6765c9-zlzmv      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-qpqkf               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m54s  kube-proxy       
	  Normal   Starting                 5m1s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m1s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m1s   kubelet          Node addons-789752 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m1s   kubelet          Node addons-789752 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m1s   kubelet          Node addons-789752 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m57s  node-controller  Node addons-789752 event: Registered Node addons-789752 in Controller
	  Normal   NodeReady                4m14s  kubelet          Node addons-789752 status is now: NodeReady
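
	The node dump above is plain kubectl output; to reproduce it against this profile:

	  # full dump as above
	  kubectl describe node addons-789752
	  # just the Ready/Pressure conditions
	  kubectl get node addons-789752 -o jsonpath='{.status.conditions}'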
	
	
	==> dmesg <==
	[Oct27 20:54] overlayfs: idmapped layers are currently not supported
	[Oct27 20:56] overlayfs: idmapped layers are currently not supported
	[Oct27 20:57] overlayfs: idmapped layers are currently not supported
	[Oct27 20:58] overlayfs: idmapped layers are currently not supported
	[ +22.437501] overlayfs: idmapped layers are currently not supported
	[Oct27 20:59] overlayfs: idmapped layers are currently not supported
	[Oct27 21:00] overlayfs: idmapped layers are currently not supported
	[Oct27 21:01] overlayfs: idmapped layers are currently not supported
	[Oct27 21:02] overlayfs: idmapped layers are currently not supported
	[Oct27 21:03] overlayfs: idmapped layers are currently not supported
	[ +50.457876] overlayfs: idmapped layers are currently not supported
	[Oct27 21:04] overlayfs: idmapped layers are currently not supported
	[Oct27 21:05] overlayfs: idmapped layers are currently not supported
	[ +28.375154] overlayfs: idmapped layers are currently not supported
	[Oct27 21:06] overlayfs: idmapped layers are currently not supported
	[ +27.785336] overlayfs: idmapped layers are currently not supported
	[Oct27 21:07] overlayfs: idmapped layers are currently not supported
	[Oct27 21:08] overlayfs: idmapped layers are currently not supported
	[Oct27 21:09] overlayfs: idmapped layers are currently not supported
	[Oct27 21:10] overlayfs: idmapped layers are currently not supported
	[Oct27 21:11] overlayfs: idmapped layers are currently not supported
	[Oct27 21:12] overlayfs: idmapped layers are currently not supported
	[Oct27 21:14] kauditd_printk_skb: 8 callbacks suppressed
	[Oct27 22:15] kauditd_printk_skb: 8 callbacks suppressed
	[Oct27 22:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [732fddf2b32debfeea89e5896d571b702244927ab3040765eda956c6120fd6ad] <==
	{"level":"warn","ts":"2025-10-27T22:17:22.316344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.351211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.395656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.415998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.446589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.474421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.503010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.543022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.566821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.603592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.614003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.630651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.668281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.680528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.703826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.742680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.756312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.775357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.863952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:38.719641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:38.745074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:18:00.638003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:18:00.658205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:18:00.683735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:18:00.712430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59780","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [195417cf0328af7821666ec831de0c1018572e8d4acab93ac2544ca2c822ce70] <==
	2025/10/27 22:19:09 GCP Auth Webhook started!
	2025/10/27 22:19:17 Ready to marshal response ...
	2025/10/27 22:19:17 Ready to write response ...
	2025/10/27 22:19:17 Ready to marshal response ...
	2025/10/27 22:19:17 Ready to write response ...
	2025/10/27 22:19:18 Ready to marshal response ...
	2025/10/27 22:19:18 Ready to write response ...
	2025/10/27 22:19:39 Ready to marshal response ...
	2025/10/27 22:19:39 Ready to write response ...
	2025/10/27 22:19:40 Ready to marshal response ...
	2025/10/27 22:19:40 Ready to write response ...
	2025/10/27 22:19:40 Ready to marshal response ...
	2025/10/27 22:19:40 Ready to write response ...
	2025/10/27 22:19:49 Ready to marshal response ...
	2025/10/27 22:19:49 Ready to write response ...
	2025/10/27 22:20:01 Ready to marshal response ...
	2025/10/27 22:20:01 Ready to write response ...
	2025/10/27 22:20:04 Ready to marshal response ...
	2025/10/27 22:20:04 Ready to write response ...
	2025/10/27 22:20:24 Ready to marshal response ...
	2025/10/27 22:20:24 Ready to write response ...
	2025/10/27 22:22:25 Ready to marshal response ...
	2025/10/27 22:22:25 Ready to write response ...
	
	
	==> kernel <==
	 22:22:27 up  5:04,  0 user,  load average: 0.72, 2.48, 3.41
	Linux addons-789752 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a6c04b76522e43566ec49632184d8253b7f3efdd2d549705d0bb56dcd3504b32] <==
	I1027 22:20:22.631232       1 main.go:301] handling current node
	I1027 22:20:32.626123       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:20:32.626155       1 main.go:301] handling current node
	I1027 22:20:42.625943       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:20:42.625983       1 main.go:301] handling current node
	I1027 22:20:52.625062       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:20:52.625096       1 main.go:301] handling current node
	I1027 22:21:02.633842       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:21:02.633876       1 main.go:301] handling current node
	I1027 22:21:12.626007       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:21:12.626041       1 main.go:301] handling current node
	I1027 22:21:22.632885       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:21:22.632918       1 main.go:301] handling current node
	I1027 22:21:32.633255       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:21:32.633367       1 main.go:301] handling current node
	I1027 22:21:42.632626       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:21:42.632736       1 main.go:301] handling current node
	I1027 22:21:52.632313       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:21:52.632419       1 main.go:301] handling current node
	I1027 22:22:02.634465       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:22:02.634579       1 main.go:301] handling current node
	I1027 22:22:12.632376       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:22:12.632488       1 main.go:301] handling current node
	I1027 22:22:22.625160       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:22:22.625245       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ed5258f512747f7de544b7f8b20e30fb6309e5f6031e68aa1d93016b71da54db] <==
	E1027 22:18:22.962785       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.209.22:443: connect: connection refused" logger="UnhandledError"
	E1027 22:18:22.968157       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.209.22:443: connect: connection refused" logger="UnhandledError"
	E1027 22:18:22.989206       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.209.22:443: connect: connection refused" logger="UnhandledError"
	E1027 22:18:23.030337       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.209.22:443: connect: connection refused" logger="UnhandledError"
	E1027 22:18:23.111616       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.209.22:443: connect: connection refused" logger="UnhandledError"
	E1027 22:18:23.272594       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.209.22:443: connect: connection refused" logger="UnhandledError"
	E1027 22:18:23.594274       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.209.22:443: connect: connection refused" logger="UnhandledError"
	E1027 22:18:23.641082       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.209.22:443: connect: connection refused" logger="UnhandledError"
	W1027 22:18:23.961260       1 handler_proxy.go:99] no RequestInfo found in the context
	W1027 22:18:23.961265       1 handler_proxy.go:99] no RequestInfo found in the context
	E1027 22:18:23.961438       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1027 22:18:23.961457       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1027 22:18:23.961515       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1027 22:18:23.962686       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1027 22:18:24.345267       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1027 22:19:28.456497       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:52830: use of closed network connection
	E1027 22:19:28.588963       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:52856: use of closed network connection
	I1027 22:20:04.165236       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1027 22:20:04.532466       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.184.18"}
	I1027 22:20:13.824163       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1027 22:20:31.626940       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1027 22:22:25.280219       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.208.59"}
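
	The metrics.k8s.io errors above mean the aggregated APIService was not yet reachable at its service IP (10.107.209.22); a sketch of how to confirm its state:

	  # AVAILABLE flips to True once metrics-server answers
	  kubectl get apiservice v1beta1.metrics.k8s.io
	  # the endpoints behind the failing service IP
	  kubectl -n kube-system get endpoints metrics-server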
	
	
	==> kube-controller-manager [b57e96f12e54c8af6eed4bafb19e50128bf903f3ab267cb2c3f7399260b3c948] <==
	I1027 22:17:30.626136       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:17:30.629086       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-789752" podCIDRs=["10.244.0.0/24"]
	I1027 22:17:30.631006       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 22:17:30.643533       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 22:17:30.643547       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:17:30.647781       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 22:17:30.658218       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 22:17:30.660758       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:17:30.660789       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 22:17:30.660797       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 22:17:30.660757       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 22:17:30.661886       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 22:17:30.662095       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 22:17:30.663445       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 22:17:30.666826       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 22:17:30.669207       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	E1027 22:17:36.678778       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1027 22:18:00.631029       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1027 22:18:00.631190       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1027 22:18:00.631234       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1027 22:18:00.657090       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1027 22:18:00.662245       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1027 22:18:00.731796       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:18:00.762962       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:18:15.618511       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [bcef984a34b582632964a62e2ea13989b587a3a34ab7f141ca2d126c15affbb6] <==
	I1027 22:17:32.657406       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:17:32.800626       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:17:32.901667       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:17:32.901742       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1027 22:17:32.901840       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:17:32.950126       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:17:32.950188       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:17:32.962060       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:17:32.962766       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:17:32.962781       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:17:32.964030       1 config.go:200] "Starting service config controller"
	I1027 22:17:32.964038       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:17:32.964055       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:17:32.964059       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:17:32.964071       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:17:32.964076       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:17:32.964680       1 config.go:309] "Starting node config controller"
	I1027 22:17:32.964686       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:17:32.964692       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:17:33.064957       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 22:17:33.064995       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:17:33.065036       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
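
	The nodePortAddresses warning above is advisory: kube-proxy accepts NodePort traffic on all local IPs unless told otherwise. A sketch of the suggested fix, assuming the kubeadm-style kube-proxy ConfigMap that minikube sets up:

	  # edit config.conf inside the ConfigMap: set nodePortAddresses: ["primary"]
	  kubectl -n kube-system edit configmap kube-proxy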
	
	
	==> kube-scheduler [f412d82dffe403b62ba84bcc01017d9c6d04b401071fcf54955edab34af34160] <==
	E1027 22:17:23.690623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 22:17:23.690691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 22:17:23.690748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 22:17:23.690814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 22:17:23.690874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 22:17:23.690935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 22:17:23.690997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 22:17:23.691061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 22:17:23.691122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 22:17:23.691180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 22:17:23.691237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 22:17:23.691300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 22:17:23.691354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 22:17:23.691411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 22:17:23.691556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 22:17:23.691577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 22:17:24.527357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 22:17:24.583540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 22:17:24.621078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 22:17:24.672179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 22:17:24.761667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 22:17:24.785098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 22:17:24.819053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 22:17:24.927492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1027 22:17:27.878857       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:20:31 addons-789752 kubelet[1315]: I1027 22:20:31.611941    1315 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f0b40c4f5cb37d36b95ce0e9d249d81ac94761f9860a23731324b345094d4ec"} err="failed to get container status \"8f0b40c4f5cb37d36b95ce0e9d249d81ac94761f9860a23731324b345094d4ec\": rpc error: code = NotFound desc = could not find container \"8f0b40c4f5cb37d36b95ce0e9d249d81ac94761f9860a23731324b345094d4ec\": container with ID starting with 8f0b40c4f5cb37d36b95ce0e9d249d81ac94761f9860a23731324b345094d4ec not found: ID does not exist"
	Oct 27 22:20:31 addons-789752 kubelet[1315]: I1027 22:20:31.669352    1315 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t6pfm\" (UniqueName: \"kubernetes.io/projected/1648de60-ce8e-4662-af13-7f7f34bf4af1-kube-api-access-t6pfm\") on node \"addons-789752\" DevicePath \"\""
	Oct 27 22:20:31 addons-789752 kubelet[1315]: I1027 22:20:31.669403    1315 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-4644f958-74b3-4284-8ddc-4c4a598baae5\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^21e78f0a-b383-11f0-ac53-ae36403ce1c4\") on node \"addons-789752\" "
	Oct 27 22:20:31 addons-789752 kubelet[1315]: I1027 22:20:31.689005    1315 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-4644f958-74b3-4284-8ddc-4c4a598baae5" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^21e78f0a-b383-11f0-ac53-ae36403ce1c4") on node "addons-789752"
	Oct 27 22:20:31 addons-789752 kubelet[1315]: I1027 22:20:31.770358    1315 reconciler_common.go:299] "Volume detached for volume \"pvc-4644f958-74b3-4284-8ddc-4c4a598baae5\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^21e78f0a-b383-11f0-ac53-ae36403ce1c4\") on node \"addons-789752\" DevicePath \"\""
	Oct 27 22:20:32 addons-789752 kubelet[1315]: I1027 22:20:32.574269    1315 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1648de60-ce8e-4662-af13-7f7f34bf4af1" path="/var/lib/kubelet/pods/1648de60-ce8e-4662-af13-7f7f34bf4af1/volumes"
	Oct 27 22:21:05 addons-789752 kubelet[1315]: I1027 22:21:05.571947    1315 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-7xjnb" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 22:21:08 addons-789752 kubelet[1315]: I1027 22:21:08.571799    1315 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-vw4fc" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 22:21:09 addons-789752 kubelet[1315]: I1027 22:21:09.571539    1315 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-pxgxr" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 22:22:13 addons-789752 kubelet[1315]: I1027 22:22:13.571475    1315 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-vw4fc" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 22:22:23 addons-789752 kubelet[1315]: I1027 22:22:23.472420    1315 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-ldrtc" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 22:22:24 addons-789752 kubelet[1315]: I1027 22:22:24.992716    1315 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-ldrtc" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 22:22:24 addons-789752 kubelet[1315]: I1027 22:22:24.992778    1315 scope.go:117] "RemoveContainer" containerID="1e05cd2f8e14b25137e894ba36af280059ae01c7eb1d9f5f9d677573f641190f"
	Oct 27 22:22:25 addons-789752 kubelet[1315]: I1027 22:22:25.331280    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bda17b21-2518-450d-9588-06a8cc90b44e-gcp-creds\") pod \"hello-world-app-5d498dc89-wn4nv\" (UID: \"bda17b21-2518-450d-9588-06a8cc90b44e\") " pod="default/hello-world-app-5d498dc89-wn4nv"
	Oct 27 22:22:25 addons-789752 kubelet[1315]: I1027 22:22:25.331342    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6zsd\" (UniqueName: \"kubernetes.io/projected/bda17b21-2518-450d-9588-06a8cc90b44e-kube-api-access-f6zsd\") pod \"hello-world-app-5d498dc89-wn4nv\" (UID: \"bda17b21-2518-450d-9588-06a8cc90b44e\") " pod="default/hello-world-app-5d498dc89-wn4nv"
	Oct 27 22:22:25 addons-789752 kubelet[1315]: W1027 22:22:25.496203    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a652b6a668fc097b87ba64479bb60d0fa96fd8202cb54c1c465cda9d5582703e/crio-b08b08cf49d48069b7ecda5210247c1e016828333db0122daf2dd9ef5b7570e0 WatchSource:0}: Error finding container b08b08cf49d48069b7ecda5210247c1e016828333db0122daf2dd9ef5b7570e0: Status 404 returned error can't find the container with id b08b08cf49d48069b7ecda5210247c1e016828333db0122daf2dd9ef5b7570e0
	Oct 27 22:22:26 addons-789752 kubelet[1315]: I1027 22:22:26.003869    1315 scope.go:117] "RemoveContainer" containerID="1e05cd2f8e14b25137e894ba36af280059ae01c7eb1d9f5f9d677573f641190f"
	Oct 27 22:22:26 addons-789752 kubelet[1315]: I1027 22:22:26.004834    1315 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-ldrtc" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 22:22:26 addons-789752 kubelet[1315]: I1027 22:22:26.010923    1315 scope.go:117] "RemoveContainer" containerID="8d0d1307c27f8747e3652dd08dd3e0f5160dc60e983a466be71b81e0c5853d50"
	Oct 27 22:22:26 addons-789752 kubelet[1315]: E1027 22:22:26.011179    1315 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-ldrtc_kube-system(bd101187-f370-4b46-8017-bd4f7b44959c)\"" pod="kube-system/registry-creds-764b6fb674-ldrtc" podUID="bd101187-f370-4b46-8017-bd4f7b44959c"
	Oct 27 22:22:26 addons-789752 kubelet[1315]: E1027 22:22:26.726774    1315 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9c6f2dffae3e8c44cf6026f8b15d3d10636a0ec8b7247ef0a8e8d3bb3b825381/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9c6f2dffae3e8c44cf6026f8b15d3d10636a0ec8b7247ef0a8e8d3bb3b825381/diff: no such file or directory, extraDiskErr: <nil>
	Oct 27 22:22:27 addons-789752 kubelet[1315]: I1027 22:22:27.045606    1315 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-ldrtc" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 22:22:27 addons-789752 kubelet[1315]: I1027 22:22:27.045671    1315 scope.go:117] "RemoveContainer" containerID="8d0d1307c27f8747e3652dd08dd3e0f5160dc60e983a466be71b81e0c5853d50"
	Oct 27 22:22:27 addons-789752 kubelet[1315]: E1027 22:22:27.045833    1315 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-ldrtc_kube-system(bd101187-f370-4b46-8017-bd4f7b44959c)\"" pod="kube-system/registry-creds-764b6fb674-ldrtc" podUID="bd101187-f370-4b46-8017-bd4f7b44959c"
	Oct 27 22:22:27 addons-789752 kubelet[1315]: I1027 22:22:27.095141    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-wn4nv" podStartSLOduration=1.438056867 podStartE2EDuration="2.095119447s" podCreationTimestamp="2025-10-27 22:22:25 +0000 UTC" firstStartedPulling="2025-10-27 22:22:25.498374238 +0000 UTC m=+299.051676999" lastFinishedPulling="2025-10-27 22:22:26.155436818 +0000 UTC m=+299.708739579" observedRunningTime="2025-10-27 22:22:27.094611173 +0000 UTC m=+300.647913942" watchObservedRunningTime="2025-10-27 22:22:27.095119447 +0000 UTC m=+300.648422208"
	
	
	==> storage-provisioner [f712dddd4573d0fe9d735c1c15c28d0975b63f01ad7343c996c9ac9e22da6813] <==
	W1027 22:22:03.571691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:05.575104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:05.581527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:07.584270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:07.588782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:09.592044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:09.596307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:11.598960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:11.603184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:13.606769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:13.615118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:15.619695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:15.624619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:17.628240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:17.635296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:19.638729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:19.643039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:21.645846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:21.650469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:23.653206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:23.661867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:25.665039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:25.676936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:27.695910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:22:27.705130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
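Three things stand out in the log dump above. The kube-scheduler "Failed to watch ... forbidden" errors are the usual startup window before its RBAC bindings propagate; they stop once "Caches are synced" is logged at 22:17:27. The kubelet's "Unable to retrieve pull secret" lines are noise from the missing gcp-auth secret, not actual pull failures. And the storage-provisioner is polling the deprecated v1 Endpoints API every two seconds, most likely as a leader-election heartbeat, so the API server answers each call with the v1.33 deprecation warning pointing at discovery.k8s.io/v1 EndpointSlice. A quick check sketch, assuming kubectl access to the same context:

	# which API resources still serve Endpoints, and the recommended replacement
	kubectl --context addons-789752 api-resources | grep -i endpoint
	kubectl --context addons-789752 get endpointslices.discovery.k8s.io -A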
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-789752 -n addons-789752
helpers_test.go:269: (dbg) Run:  kubectl --context addons-789752 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-gcl8s ingress-nginx-admission-patch-4f5h7
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-789752 describe pod ingress-nginx-admission-create-gcl8s ingress-nginx-admission-patch-4f5h7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-789752 describe pod ingress-nginx-admission-create-gcl8s ingress-nginx-admission-patch-4f5h7: exit status 1 (116.762885ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-gcl8s" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4f5h7" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-789752 describe pod ingress-nginx-admission-create-gcl8s ingress-nginx-admission-patch-4f5h7: exit status 1
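The two pods the post-mortem fails to describe belong to the one-shot ingress-nginx admission Jobs (ingress-nginx-admission-create and -patch), whose pods are removed once the Jobs complete, so the NotFound errors here are expected cleanup rather than part of the failure. To confirm by hand, assuming the Jobs themselves have not been garbage-collected:

	kubectl --context addons-789752 -n ingress-nginx get jobs,pods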
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-789752 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (298.33494ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:22:28.615377 1145211 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:22:28.616383 1145211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:22:28.616434 1145211 out.go:374] Setting ErrFile to fd 2...
	I1027 22:22:28.616457 1145211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:22:28.616889 1145211 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:22:28.617290 1145211 mustload.go:66] Loading cluster: addons-789752
	I1027 22:22:28.617715 1145211 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:22:28.617755 1145211 addons.go:606] checking whether the cluster is paused
	I1027 22:22:28.617911 1145211 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:22:28.617945 1145211 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:22:28.619032 1145211 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:22:28.646758 1145211 ssh_runner.go:195] Run: systemctl --version
	I1027 22:22:28.646895 1145211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:22:28.665459 1145211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:22:28.777286 1145211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:22:28.777448 1145211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:22:28.814571 1145211 cri.go:89] found id: "8d0d1307c27f8747e3652dd08dd3e0f5160dc60e983a466be71b81e0c5853d50"
	I1027 22:22:28.814595 1145211 cri.go:89] found id: "75710d7cc526305b5d44527c3948f7660d0f11c9bb988fea4cc50adb7f70c4b0"
	I1027 22:22:28.814600 1145211 cri.go:89] found id: "ba4375e556d33ee6fe2adbb573ec62057326c21efd49a2ca6746e0e867dca0eb"
	I1027 22:22:28.814604 1145211 cri.go:89] found id: "6360be647f550637a0e7e58311ce8090659f094e7d1fdaace5aa6c9b9e1084ff"
	I1027 22:22:28.814608 1145211 cri.go:89] found id: "718db41ae0e017a0def85acbf7b9a58c43c4917bcde880c3ec1dad468aaa3ad0"
	I1027 22:22:28.814611 1145211 cri.go:89] found id: "fa9874677b5b67f09e92a81d9823e4f1e082a4821a07ab9244b51921cf04483a"
	I1027 22:22:28.814615 1145211 cri.go:89] found id: "e49247d0ffa77a129b4b9b98634538344f523f40499e976caa9a86569158b66d"
	I1027 22:22:28.814618 1145211 cri.go:89] found id: "2a94fd6377a9793dba093bc0477e41ee94cbc624b3f6a43bb885426fc9ced620"
	I1027 22:22:28.814622 1145211 cri.go:89] found id: "1891841b92bc24962a3bc53daf5b28f39360ac3c20a06fa7adc815691b905a55"
	I1027 22:22:28.814628 1145211 cri.go:89] found id: "364352eda05362968f44f25fc3f6a928413dbff5414c84001966e91d713fc3c5"
	I1027 22:22:28.814632 1145211 cri.go:89] found id: "2b141a747edd885ca1f2cb0de68d722d1172c781ee2f1dc422c402f2426b71a5"
	I1027 22:22:28.814635 1145211 cri.go:89] found id: "2e03207b4b26edc5c7672a96ced8ce7c0a8bba6d5d8054568dafe65d952af2fe"
	I1027 22:22:28.814638 1145211 cri.go:89] found id: "c89583e34b204413fbc3cae91a3c194e064a4a74af39d957e557f74a7b9c5dfc"
	I1027 22:22:28.814651 1145211 cri.go:89] found id: "9265cc16ebe00d91c78da71020aea5e78947eb97fca3558b1ee78ec3e8c7ab51"
	I1027 22:22:28.814655 1145211 cri.go:89] found id: "9872fee8e1cf948bd5e39ef7072c2312923b19b6158d32881c3f53e2068a2eba"
	I1027 22:22:28.814661 1145211 cri.go:89] found id: "3c9c0fd6e60966dd77759dd3fca479416d247d034fcaf1409c303183ab3e1ccb"
	I1027 22:22:28.814669 1145211 cri.go:89] found id: "f712dddd4573d0fe9d735c1c15c28d0975b63f01ad7343c996c9ac9e22da6813"
	I1027 22:22:28.814683 1145211 cri.go:89] found id: "a7d75dad24853dbae39098cf151dae187d4239afff3b61a9449981f10b79fd2a"
	I1027 22:22:28.814687 1145211 cri.go:89] found id: "bcef984a34b582632964a62e2ea13989b587a3a34ab7f141ca2d126c15affbb6"
	I1027 22:22:28.814690 1145211 cri.go:89] found id: "a6c04b76522e43566ec49632184d8253b7f3efdd2d549705d0bb56dcd3504b32"
	I1027 22:22:28.814696 1145211 cri.go:89] found id: "f412d82dffe403b62ba84bcc01017d9c6d04b401071fcf54955edab34af34160"
	I1027 22:22:28.814702 1145211 cri.go:89] found id: "ed5258f512747f7de544b7f8b20e30fb6309e5f6031e68aa1d93016b71da54db"
	I1027 22:22:28.814707 1145211 cri.go:89] found id: "b57e96f12e54c8af6eed4bafb19e50128bf903f3ab267cb2c3f7399260b3c948"
	I1027 22:22:28.814711 1145211 cri.go:89] found id: "732fddf2b32debfeea89e5896d571b702244927ab3040765eda956c6120fd6ad"
	I1027 22:22:28.814714 1145211 cri.go:89] found id: ""
	I1027 22:22:28.814780 1145211 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:22:28.830566 1145211 out.go:203] 
	W1027 22:22:28.833578 1145211 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:22:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:22:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:22:28.833607 1145211 out.go:285] * 
	* 
	W1027 22:22:28.842445 1145211 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:22:28.845536 1145211 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-789752 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-789752 addons disable ingress --alsologtostderr -v=1: exit status 11 (269.432903ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:22:28.905536 1145256 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:22:28.906806 1145256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:22:28.906850 1145256 out.go:374] Setting ErrFile to fd 2...
	I1027 22:22:28.906871 1145256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:22:28.907165 1145256 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:22:28.907508 1145256 mustload.go:66] Loading cluster: addons-789752
	I1027 22:22:28.907933 1145256 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:22:28.907979 1145256 addons.go:606] checking whether the cluster is paused
	I1027 22:22:28.908113 1145256 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:22:28.908149 1145256 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:22:28.908703 1145256 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:22:28.926847 1145256 ssh_runner.go:195] Run: systemctl --version
	I1027 22:22:28.926903 1145256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:22:28.944135 1145256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:22:29.053087 1145256 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:22:29.053174 1145256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:22:29.085172 1145256 cri.go:89] found id: "8d0d1307c27f8747e3652dd08dd3e0f5160dc60e983a466be71b81e0c5853d50"
	I1027 22:22:29.085250 1145256 cri.go:89] found id: "75710d7cc526305b5d44527c3948f7660d0f11c9bb988fea4cc50adb7f70c4b0"
	I1027 22:22:29.085270 1145256 cri.go:89] found id: "ba4375e556d33ee6fe2adbb573ec62057326c21efd49a2ca6746e0e867dca0eb"
	I1027 22:22:29.085289 1145256 cri.go:89] found id: "6360be647f550637a0e7e58311ce8090659f094e7d1fdaace5aa6c9b9e1084ff"
	I1027 22:22:29.085324 1145256 cri.go:89] found id: "718db41ae0e017a0def85acbf7b9a58c43c4917bcde880c3ec1dad468aaa3ad0"
	I1027 22:22:29.085346 1145256 cri.go:89] found id: "fa9874677b5b67f09e92a81d9823e4f1e082a4821a07ab9244b51921cf04483a"
	I1027 22:22:29.085364 1145256 cri.go:89] found id: "e49247d0ffa77a129b4b9b98634538344f523f40499e976caa9a86569158b66d"
	I1027 22:22:29.085381 1145256 cri.go:89] found id: "2a94fd6377a9793dba093bc0477e41ee94cbc624b3f6a43bb885426fc9ced620"
	I1027 22:22:29.085412 1145256 cri.go:89] found id: "1891841b92bc24962a3bc53daf5b28f39360ac3c20a06fa7adc815691b905a55"
	I1027 22:22:29.085437 1145256 cri.go:89] found id: "364352eda05362968f44f25fc3f6a928413dbff5414c84001966e91d713fc3c5"
	I1027 22:22:29.085455 1145256 cri.go:89] found id: "2b141a747edd885ca1f2cb0de68d722d1172c781ee2f1dc422c402f2426b71a5"
	I1027 22:22:29.085474 1145256 cri.go:89] found id: "2e03207b4b26edc5c7672a96ced8ce7c0a8bba6d5d8054568dafe65d952af2fe"
	I1027 22:22:29.085505 1145256 cri.go:89] found id: "c89583e34b204413fbc3cae91a3c194e064a4a74af39d957e557f74a7b9c5dfc"
	I1027 22:22:29.085527 1145256 cri.go:89] found id: "9265cc16ebe00d91c78da71020aea5e78947eb97fca3558b1ee78ec3e8c7ab51"
	I1027 22:22:29.085546 1145256 cri.go:89] found id: "9872fee8e1cf948bd5e39ef7072c2312923b19b6158d32881c3f53e2068a2eba"
	I1027 22:22:29.085593 1145256 cri.go:89] found id: "3c9c0fd6e60966dd77759dd3fca479416d247d034fcaf1409c303183ab3e1ccb"
	I1027 22:22:29.085646 1145256 cri.go:89] found id: "f712dddd4573d0fe9d735c1c15c28d0975b63f01ad7343c996c9ac9e22da6813"
	I1027 22:22:29.085677 1145256 cri.go:89] found id: "a7d75dad24853dbae39098cf151dae187d4239afff3b61a9449981f10b79fd2a"
	I1027 22:22:29.085696 1145256 cri.go:89] found id: "bcef984a34b582632964a62e2ea13989b587a3a34ab7f141ca2d126c15affbb6"
	I1027 22:22:29.085727 1145256 cri.go:89] found id: "a6c04b76522e43566ec49632184d8253b7f3efdd2d549705d0bb56dcd3504b32"
	I1027 22:22:29.085755 1145256 cri.go:89] found id: "f412d82dffe403b62ba84bcc01017d9c6d04b401071fcf54955edab34af34160"
	I1027 22:22:29.085776 1145256 cri.go:89] found id: "ed5258f512747f7de544b7f8b20e30fb6309e5f6031e68aa1d93016b71da54db"
	I1027 22:22:29.085812 1145256 cri.go:89] found id: "b57e96f12e54c8af6eed4bafb19e50128bf903f3ab267cb2c3f7399260b3c948"
	I1027 22:22:29.085834 1145256 cri.go:89] found id: "732fddf2b32debfeea89e5896d571b702244927ab3040765eda956c6120fd6ad"
	I1027 22:22:29.085852 1145256 cri.go:89] found id: ""
	I1027 22:22:29.085942 1145256 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:22:29.101994 1145256 out.go:203] 
	W1027 22:22:29.105052 1145256 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:22:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:22:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:22:29.105080 1145256 out.go:285] * 
	* 
	W1027 22:22:29.114153 1145256 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:22:29.117293 1145256 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-789752 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.34s)
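Note that the ingress data path itself worked; the test only fails at the two addons disable calls. Every exit status 11 in this report follows the same path: before disabling anything, minikube checks whether the cluster is paused, and on a CRI-O profile that check shells into the node and runs sudo runc list -f json, which fails with "open /run/runc: no such file or directory", evidently because /run/runc does not exist inside this node. A manual repro sketch, assuming the addons-789752 profile is still running:

	# the exact command the paused check runs (fails as in the trace above)
	minikube -p addons-789752 ssh "sudo runc list -f json"
	# the CRI-side listing that does succeed, copied verbatim from the same trace
	minikube -p addons-789752 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"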

                                                
                                    
TestAddons/parallel/InspektorGadget (5.34s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-zrlpj" [0c2609b4-07cf-4d03-8554-35aea4a47554] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004734825s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-789752 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (328.77336ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:20:03.534738 1143085 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:20:03.535951 1143085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:20:03.535967 1143085 out.go:374] Setting ErrFile to fd 2...
	I1027 22:20:03.535973 1143085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:20:03.536269 1143085 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:20:03.536580 1143085 mustload.go:66] Loading cluster: addons-789752
	I1027 22:20:03.536996 1143085 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:20:03.537012 1143085 addons.go:606] checking whether the cluster is paused
	I1027 22:20:03.537116 1143085 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:20:03.537126 1143085 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:20:03.537607 1143085 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:20:03.562350 1143085 ssh_runner.go:195] Run: systemctl --version
	I1027 22:20:03.562520 1143085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:20:03.593560 1143085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:20:03.709781 1143085 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:20:03.709885 1143085 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:20:03.745427 1143085 cri.go:89] found id: "75710d7cc526305b5d44527c3948f7660d0f11c9bb988fea4cc50adb7f70c4b0"
	I1027 22:20:03.745451 1143085 cri.go:89] found id: "ba4375e556d33ee6fe2adbb573ec62057326c21efd49a2ca6746e0e867dca0eb"
	I1027 22:20:03.745457 1143085 cri.go:89] found id: "6360be647f550637a0e7e58311ce8090659f094e7d1fdaace5aa6c9b9e1084ff"
	I1027 22:20:03.745460 1143085 cri.go:89] found id: "718db41ae0e017a0def85acbf7b9a58c43c4917bcde880c3ec1dad468aaa3ad0"
	I1027 22:20:03.745464 1143085 cri.go:89] found id: "fa9874677b5b67f09e92a81d9823e4f1e082a4821a07ab9244b51921cf04483a"
	I1027 22:20:03.745467 1143085 cri.go:89] found id: "e49247d0ffa77a129b4b9b98634538344f523f40499e976caa9a86569158b66d"
	I1027 22:20:03.745471 1143085 cri.go:89] found id: "2a94fd6377a9793dba093bc0477e41ee94cbc624b3f6a43bb885426fc9ced620"
	I1027 22:20:03.745474 1143085 cri.go:89] found id: "1891841b92bc24962a3bc53daf5b28f39360ac3c20a06fa7adc815691b905a55"
	I1027 22:20:03.745500 1143085 cri.go:89] found id: "364352eda05362968f44f25fc3f6a928413dbff5414c84001966e91d713fc3c5"
	I1027 22:20:03.745507 1143085 cri.go:89] found id: "2b141a747edd885ca1f2cb0de68d722d1172c781ee2f1dc422c402f2426b71a5"
	I1027 22:20:03.745511 1143085 cri.go:89] found id: "2e03207b4b26edc5c7672a96ced8ce7c0a8bba6d5d8054568dafe65d952af2fe"
	I1027 22:20:03.745514 1143085 cri.go:89] found id: "c89583e34b204413fbc3cae91a3c194e064a4a74af39d957e557f74a7b9c5dfc"
	I1027 22:20:03.745517 1143085 cri.go:89] found id: "9265cc16ebe00d91c78da71020aea5e78947eb97fca3558b1ee78ec3e8c7ab51"
	I1027 22:20:03.745520 1143085 cri.go:89] found id: "9872fee8e1cf948bd5e39ef7072c2312923b19b6158d32881c3f53e2068a2eba"
	I1027 22:20:03.745523 1143085 cri.go:89] found id: "3c9c0fd6e60966dd77759dd3fca479416d247d034fcaf1409c303183ab3e1ccb"
	I1027 22:20:03.745528 1143085 cri.go:89] found id: "f712dddd4573d0fe9d735c1c15c28d0975b63f01ad7343c996c9ac9e22da6813"
	I1027 22:20:03.745531 1143085 cri.go:89] found id: "a7d75dad24853dbae39098cf151dae187d4239afff3b61a9449981f10b79fd2a"
	I1027 22:20:03.745535 1143085 cri.go:89] found id: "bcef984a34b582632964a62e2ea13989b587a3a34ab7f141ca2d126c15affbb6"
	I1027 22:20:03.745539 1143085 cri.go:89] found id: "a6c04b76522e43566ec49632184d8253b7f3efdd2d549705d0bb56dcd3504b32"
	I1027 22:20:03.745542 1143085 cri.go:89] found id: "f412d82dffe403b62ba84bcc01017d9c6d04b401071fcf54955edab34af34160"
	I1027 22:20:03.745547 1143085 cri.go:89] found id: "ed5258f512747f7de544b7f8b20e30fb6309e5f6031e68aa1d93016b71da54db"
	I1027 22:20:03.745550 1143085 cri.go:89] found id: "b57e96f12e54c8af6eed4bafb19e50128bf903f3ab267cb2c3f7399260b3c948"
	I1027 22:20:03.745553 1143085 cri.go:89] found id: "732fddf2b32debfeea89e5896d571b702244927ab3040765eda956c6120fd6ad"
	I1027 22:20:03.745555 1143085 cri.go:89] found id: ""
	I1027 22:20:03.745622 1143085 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:20:03.761832 1143085 out.go:203] 
	W1027 22:20:03.764817 1143085 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:20:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:20:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:20:03.764875 1143085 out.go:285] * 
	* 
	W1027 22:20:03.773710 1143085 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:20:03.776583 1143085 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-789752 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.34s)
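Here the gadget DaemonSet pod reached Ready in about 5s, well inside the 8m budget; only the disable step failed, again on the runc-based paused check described under the Ingress failure above. The readiness wait at addons_test.go:823 is roughly equivalent to this kubectl sketch (namespace and label taken from the log):

	kubectl --context addons-789752 -n gadget wait --for=condition=Ready pod -l k8s-app=gadget --timeout=8m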

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.37s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.637734ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-8kfjg" [c1cd9081-6ece-4513-a137-8d3c8a378a70] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003667055s
addons_test.go:463: (dbg) Run:  kubectl --context addons-789752 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-789752 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (269.29ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:19:58.228220 1142948 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:19:58.229316 1142948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:19:58.229333 1142948 out.go:374] Setting ErrFile to fd 2...
	I1027 22:19:58.229339 1142948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:19:58.229607 1142948 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:19:58.229918 1142948 mustload.go:66] Loading cluster: addons-789752
	I1027 22:19:58.230294 1142948 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:19:58.230313 1142948 addons.go:606] checking whether the cluster is paused
	I1027 22:19:58.230452 1142948 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:19:58.230469 1142948 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:19:58.230929 1142948 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:19:58.248287 1142948 ssh_runner.go:195] Run: systemctl --version
	I1027 22:19:58.248360 1142948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:19:58.270430 1142948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:19:58.373761 1142948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:19:58.373847 1142948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:19:58.406460 1142948 cri.go:89] found id: "75710d7cc526305b5d44527c3948f7660d0f11c9bb988fea4cc50adb7f70c4b0"
	I1027 22:19:58.406522 1142948 cri.go:89] found id: "ba4375e556d33ee6fe2adbb573ec62057326c21efd49a2ca6746e0e867dca0eb"
	I1027 22:19:58.406543 1142948 cri.go:89] found id: "6360be647f550637a0e7e58311ce8090659f094e7d1fdaace5aa6c9b9e1084ff"
	I1027 22:19:58.406562 1142948 cri.go:89] found id: "718db41ae0e017a0def85acbf7b9a58c43c4917bcde880c3ec1dad468aaa3ad0"
	I1027 22:19:58.406597 1142948 cri.go:89] found id: "fa9874677b5b67f09e92a81d9823e4f1e082a4821a07ab9244b51921cf04483a"
	I1027 22:19:58.406621 1142948 cri.go:89] found id: "e49247d0ffa77a129b4b9b98634538344f523f40499e976caa9a86569158b66d"
	I1027 22:19:58.406640 1142948 cri.go:89] found id: "2a94fd6377a9793dba093bc0477e41ee94cbc624b3f6a43bb885426fc9ced620"
	I1027 22:19:58.406658 1142948 cri.go:89] found id: "1891841b92bc24962a3bc53daf5b28f39360ac3c20a06fa7adc815691b905a55"
	I1027 22:19:58.406677 1142948 cri.go:89] found id: "364352eda05362968f44f25fc3f6a928413dbff5414c84001966e91d713fc3c5"
	I1027 22:19:58.406708 1142948 cri.go:89] found id: "2b141a747edd885ca1f2cb0de68d722d1172c781ee2f1dc422c402f2426b71a5"
	I1027 22:19:58.406726 1142948 cri.go:89] found id: "2e03207b4b26edc5c7672a96ced8ce7c0a8bba6d5d8054568dafe65d952af2fe"
	I1027 22:19:58.406745 1142948 cri.go:89] found id: "c89583e34b204413fbc3cae91a3c194e064a4a74af39d957e557f74a7b9c5dfc"
	I1027 22:19:58.406764 1142948 cri.go:89] found id: "9265cc16ebe00d91c78da71020aea5e78947eb97fca3558b1ee78ec3e8c7ab51"
	I1027 22:19:58.406790 1142948 cri.go:89] found id: "9872fee8e1cf948bd5e39ef7072c2312923b19b6158d32881c3f53e2068a2eba"
	I1027 22:19:58.406810 1142948 cri.go:89] found id: "3c9c0fd6e60966dd77759dd3fca479416d247d034fcaf1409c303183ab3e1ccb"
	I1027 22:19:58.406833 1142948 cri.go:89] found id: "f712dddd4573d0fe9d735c1c15c28d0975b63f01ad7343c996c9ac9e22da6813"
	I1027 22:19:58.406867 1142948 cri.go:89] found id: "a7d75dad24853dbae39098cf151dae187d4239afff3b61a9449981f10b79fd2a"
	I1027 22:19:58.406887 1142948 cri.go:89] found id: "bcef984a34b582632964a62e2ea13989b587a3a34ab7f141ca2d126c15affbb6"
	I1027 22:19:58.406906 1142948 cri.go:89] found id: "a6c04b76522e43566ec49632184d8253b7f3efdd2d549705d0bb56dcd3504b32"
	I1027 22:19:58.406925 1142948 cri.go:89] found id: "f412d82dffe403b62ba84bcc01017d9c6d04b401071fcf54955edab34af34160"
	I1027 22:19:58.406947 1142948 cri.go:89] found id: "ed5258f512747f7de544b7f8b20e30fb6309e5f6031e68aa1d93016b71da54db"
	I1027 22:19:58.406964 1142948 cri.go:89] found id: "b57e96f12e54c8af6eed4bafb19e50128bf903f3ab267cb2c3f7399260b3c948"
	I1027 22:19:58.406982 1142948 cri.go:89] found id: "732fddf2b32debfeea89e5896d571b702244927ab3040765eda956c6120fd6ad"
	I1027 22:19:58.407000 1142948 cri.go:89] found id: ""
	I1027 22:19:58.407080 1142948 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:19:58.422957 1142948 out.go:203] 
	W1027 22:19:58.425922 1142948 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:19:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:19:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:19:58.425947 1142948 out.go:285] * 
	* 
	W1027 22:19:58.434883 1142948 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:19:58.437832 1142948 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-789752 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.37s)
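Same shape again: metrics-server was healthy and kubectl top pods succeeded; only the disable step tripped over the paused check. The top call at addons_test.go:463 works once metrics-server's aggregated API is registered and serving, which can be probed by hand; the APIService name below is assumed from the stock metrics-server manifests and is not shown in this log:

	# the aggregated API that metrics-server registers (assumed name)
	kubectl --context addons-789752 get apiservice v1beta1.metrics.k8s.io
	# the same call the test makes
	kubectl --context addons-789752 top pods -n kube-system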

                                                
                                    
x
+
TestAddons/parallel/CSI (42.39s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1027 22:19:50.199901 1134735 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1027 22:19:50.204199 1134735 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1027 22:19:50.204226 1134735 kapi.go:107] duration metric: took 4.339211ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.348319ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-789752 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-789752 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [1a4974ba-3d81-484b-a483-e70936792ca4] Pending
helpers_test.go:352: "task-pv-pod" [1a4974ba-3d81-484b-a483-e70936792ca4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [1a4974ba-3d81-484b-a483-e70936792ca4] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004394904s
addons_test.go:572: (dbg) Run:  kubectl --context addons-789752 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-789752 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-789752 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-789752 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-789752 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-789752 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-789752 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [1648de60-ce8e-4662-af13-7f7f34bf4af1] Pending
helpers_test.go:352: "task-pv-pod-restore" [1648de60-ce8e-4662-af13-7f7f34bf4af1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [1648de60-ce8e-4662-af13-7f7f34bf4af1] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003763242s
addons_test.go:614: (dbg) Run:  kubectl --context addons-789752 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-789752 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-789752 delete volumesnapshot new-snapshot-demo
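Up to this point the CSI exercise passed end to end: the PVC bound, a pod consumed it, a VolumeSnapshot was taken, a second PVC was restored from the snapshot and consumed, and everything was deleted. The snapshot step applies testdata/csi-hostpath-driver/snapshot.yaml, whose contents the log does not show; a plausible sketch of an equivalent object, with the class name assumed from the stock csi-hostpath-driver addon:

	kubectl --context addons-789752 apply -f - <<'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
	  source:
	    persistentVolumeClaimName: hpvc                 # the PVC created earlier in this test
	EOF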
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-789752 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (267.741246ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:20:32.084940 1143936 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:20:32.085654 1143936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:20:32.085672 1143936 out.go:374] Setting ErrFile to fd 2...
	I1027 22:20:32.085679 1143936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:20:32.085931 1143936 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:20:32.086232 1143936 mustload.go:66] Loading cluster: addons-789752
	I1027 22:20:32.086688 1143936 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:20:32.086710 1143936 addons.go:606] checking whether the cluster is paused
	I1027 22:20:32.086843 1143936 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:20:32.086860 1143936 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:20:32.087407 1143936 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:20:32.108131 1143936 ssh_runner.go:195] Run: systemctl --version
	I1027 22:20:32.108197 1143936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:20:32.128533 1143936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:20:32.234304 1143936 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:20:32.234409 1143936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:20:32.268214 1143936 cri.go:89] found id: "75710d7cc526305b5d44527c3948f7660d0f11c9bb988fea4cc50adb7f70c4b0"
	I1027 22:20:32.268240 1143936 cri.go:89] found id: "ba4375e556d33ee6fe2adbb573ec62057326c21efd49a2ca6746e0e867dca0eb"
	I1027 22:20:32.268245 1143936 cri.go:89] found id: "6360be647f550637a0e7e58311ce8090659f094e7d1fdaace5aa6c9b9e1084ff"
	I1027 22:20:32.268250 1143936 cri.go:89] found id: "718db41ae0e017a0def85acbf7b9a58c43c4917bcde880c3ec1dad468aaa3ad0"
	I1027 22:20:32.268254 1143936 cri.go:89] found id: "fa9874677b5b67f09e92a81d9823e4f1e082a4821a07ab9244b51921cf04483a"
	I1027 22:20:32.268257 1143936 cri.go:89] found id: "e49247d0ffa77a129b4b9b98634538344f523f40499e976caa9a86569158b66d"
	I1027 22:20:32.268261 1143936 cri.go:89] found id: "2a94fd6377a9793dba093bc0477e41ee94cbc624b3f6a43bb885426fc9ced620"
	I1027 22:20:32.268264 1143936 cri.go:89] found id: "1891841b92bc24962a3bc53daf5b28f39360ac3c20a06fa7adc815691b905a55"
	I1027 22:20:32.268268 1143936 cri.go:89] found id: "364352eda05362968f44f25fc3f6a928413dbff5414c84001966e91d713fc3c5"
	I1027 22:20:32.268273 1143936 cri.go:89] found id: "2b141a747edd885ca1f2cb0de68d722d1172c781ee2f1dc422c402f2426b71a5"
	I1027 22:20:32.268277 1143936 cri.go:89] found id: "2e03207b4b26edc5c7672a96ced8ce7c0a8bba6d5d8054568dafe65d952af2fe"
	I1027 22:20:32.268281 1143936 cri.go:89] found id: "c89583e34b204413fbc3cae91a3c194e064a4a74af39d957e557f74a7b9c5dfc"
	I1027 22:20:32.268285 1143936 cri.go:89] found id: "9265cc16ebe00d91c78da71020aea5e78947eb97fca3558b1ee78ec3e8c7ab51"
	I1027 22:20:32.268288 1143936 cri.go:89] found id: "9872fee8e1cf948bd5e39ef7072c2312923b19b6158d32881c3f53e2068a2eba"
	I1027 22:20:32.268291 1143936 cri.go:89] found id: "3c9c0fd6e60966dd77759dd3fca479416d247d034fcaf1409c303183ab3e1ccb"
	I1027 22:20:32.268297 1143936 cri.go:89] found id: "f712dddd4573d0fe9d735c1c15c28d0975b63f01ad7343c996c9ac9e22da6813"
	I1027 22:20:32.268304 1143936 cri.go:89] found id: "a7d75dad24853dbae39098cf151dae187d4239afff3b61a9449981f10b79fd2a"
	I1027 22:20:32.268310 1143936 cri.go:89] found id: "bcef984a34b582632964a62e2ea13989b587a3a34ab7f141ca2d126c15affbb6"
	I1027 22:20:32.268313 1143936 cri.go:89] found id: "a6c04b76522e43566ec49632184d8253b7f3efdd2d549705d0bb56dcd3504b32"
	I1027 22:20:32.268316 1143936 cri.go:89] found id: "f412d82dffe403b62ba84bcc01017d9c6d04b401071fcf54955edab34af34160"
	I1027 22:20:32.268321 1143936 cri.go:89] found id: "ed5258f512747f7de544b7f8b20e30fb6309e5f6031e68aa1d93016b71da54db"
	I1027 22:20:32.268324 1143936 cri.go:89] found id: "b57e96f12e54c8af6eed4bafb19e50128bf903f3ab267cb2c3f7399260b3c948"
	I1027 22:20:32.268327 1143936 cri.go:89] found id: "732fddf2b32debfeea89e5896d571b702244927ab3040765eda956c6120fd6ad"
	I1027 22:20:32.268331 1143936 cri.go:89] found id: ""
	I1027 22:20:32.268382 1143936 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:20:32.284207 1143936 out.go:203] 
	W1027 22:20:32.287125 1143936 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:20:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:20:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:20:32.287158 1143936 out.go:285] * 
	* 
	W1027 22:20:32.295949 1143936 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:20:32.298859 1143936 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-789752 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-789752 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (276.298412ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 22:20:32.357576 1143979 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:20:32.358885 1143979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:20:32.358903 1143979 out.go:374] Setting ErrFile to fd 2...
	I1027 22:20:32.358908 1143979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:20:32.359237 1143979 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:20:32.359544 1143979 mustload.go:66] Loading cluster: addons-789752
	I1027 22:20:32.359964 1143979 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:20:32.359985 1143979 addons.go:606] checking whether the cluster is paused
	I1027 22:20:32.360110 1143979 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:20:32.360129 1143979 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:20:32.360646 1143979 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:20:32.378349 1143979 ssh_runner.go:195] Run: systemctl --version
	I1027 22:20:32.378452 1143979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:20:32.400465 1143979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:20:32.505081 1143979 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:20:32.505191 1143979 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:20:32.540768 1143979 cri.go:89] found id: "75710d7cc526305b5d44527c3948f7660d0f11c9bb988fea4cc50adb7f70c4b0"
	I1027 22:20:32.540791 1143979 cri.go:89] found id: "ba4375e556d33ee6fe2adbb573ec62057326c21efd49a2ca6746e0e867dca0eb"
	I1027 22:20:32.540802 1143979 cri.go:89] found id: "6360be647f550637a0e7e58311ce8090659f094e7d1fdaace5aa6c9b9e1084ff"
	I1027 22:20:32.540807 1143979 cri.go:89] found id: "718db41ae0e017a0def85acbf7b9a58c43c4917bcde880c3ec1dad468aaa3ad0"
	I1027 22:20:32.540810 1143979 cri.go:89] found id: "fa9874677b5b67f09e92a81d9823e4f1e082a4821a07ab9244b51921cf04483a"
	I1027 22:20:32.540814 1143979 cri.go:89] found id: "e49247d0ffa77a129b4b9b98634538344f523f40499e976caa9a86569158b66d"
	I1027 22:20:32.540817 1143979 cri.go:89] found id: "2a94fd6377a9793dba093bc0477e41ee94cbc624b3f6a43bb885426fc9ced620"
	I1027 22:20:32.540820 1143979 cri.go:89] found id: "1891841b92bc24962a3bc53daf5b28f39360ac3c20a06fa7adc815691b905a55"
	I1027 22:20:32.540823 1143979 cri.go:89] found id: "364352eda05362968f44f25fc3f6a928413dbff5414c84001966e91d713fc3c5"
	I1027 22:20:32.540829 1143979 cri.go:89] found id: "2b141a747edd885ca1f2cb0de68d722d1172c781ee2f1dc422c402f2426b71a5"
	I1027 22:20:32.540832 1143979 cri.go:89] found id: "2e03207b4b26edc5c7672a96ced8ce7c0a8bba6d5d8054568dafe65d952af2fe"
	I1027 22:20:32.540835 1143979 cri.go:89] found id: "c89583e34b204413fbc3cae91a3c194e064a4a74af39d957e557f74a7b9c5dfc"
	I1027 22:20:32.540838 1143979 cri.go:89] found id: "9265cc16ebe00d91c78da71020aea5e78947eb97fca3558b1ee78ec3e8c7ab51"
	I1027 22:20:32.540842 1143979 cri.go:89] found id: "9872fee8e1cf948bd5e39ef7072c2312923b19b6158d32881c3f53e2068a2eba"
	I1027 22:20:32.540846 1143979 cri.go:89] found id: "3c9c0fd6e60966dd77759dd3fca479416d247d034fcaf1409c303183ab3e1ccb"
	I1027 22:20:32.540850 1143979 cri.go:89] found id: "f712dddd4573d0fe9d735c1c15c28d0975b63f01ad7343c996c9ac9e22da6813"
	I1027 22:20:32.540854 1143979 cri.go:89] found id: "a7d75dad24853dbae39098cf151dae187d4239afff3b61a9449981f10b79fd2a"
	I1027 22:20:32.540858 1143979 cri.go:89] found id: "bcef984a34b582632964a62e2ea13989b587a3a34ab7f141ca2d126c15affbb6"
	I1027 22:20:32.540861 1143979 cri.go:89] found id: "a6c04b76522e43566ec49632184d8253b7f3efdd2d549705d0bb56dcd3504b32"
	I1027 22:20:32.540864 1143979 cri.go:89] found id: "f412d82dffe403b62ba84bcc01017d9c6d04b401071fcf54955edab34af34160"
	I1027 22:20:32.540869 1143979 cri.go:89] found id: "ed5258f512747f7de544b7f8b20e30fb6309e5f6031e68aa1d93016b71da54db"
	I1027 22:20:32.540876 1143979 cri.go:89] found id: "b57e96f12e54c8af6eed4bafb19e50128bf903f3ab267cb2c3f7399260b3c948"
	I1027 22:20:32.540879 1143979 cri.go:89] found id: "732fddf2b32debfeea89e5896d571b702244927ab3040765eda956c6120fd6ad"
	I1027 22:20:32.540882 1143979 cri.go:89] found id: ""
	I1027 22:20:32.540944 1143979 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:20:32.555592 1143979 out.go:203] 
	W1027 22:20:32.558571 1143979 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:20:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:20:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:20:32.558597 1143979 out.go:285] * 
	* 
	W1027 22:20:32.567410 1143979 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:20:32.570342 1143979 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-789752 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (42.39s)
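Every enable/disable failure in this group exits 11 for the same reason: the paused-state check shells out to `sudo runc list -f json`, and on this crio node /run/runc does not exist, so runc exits 1 and minikube aborts with MK_ADDON_DISABLE_PAUSED. A minimal Go sketch of a more defensive probe follows, assuming runc's JSON list format (id/status fields) and treating an absent state directory as "nothing paused"; this is illustrative, not minikube's actual cri.go:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// runcContainer covers the two fields of `runc list -f json` output this
// sketch needs.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// pausedIDs mimics the failing check, but first treats a missing /run/runc
// state directory (the exact error in the log above) as "no runc-managed
// containers, so nothing is paused" rather than a fatal condition.
// Assumption: that interpretation is safe on a crio node like this one.
func pausedIDs() ([]string, error) {
	if _, err := os.Stat("/run/runc"); os.IsNotExist(err) {
		return nil, nil
	}
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var ids []string
	for _, c := range cs {
		if c.Status == "paused" {
			ids = append(ids, c.ID)
		}
	}
	return ids, nil
}

func main() {
	ids, err := pausedIDs()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("paused containers:", ids)
}

Under that assumption the `crictl ps` listing already succeeds (the "found id:" lines above), so only the runc step would need the guard for `addons disable` to proceed here.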

TestAddons/parallel/Headlamp (3.78s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-789752 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-789752 --alsologtostderr -v=1: exit status 11 (334.834423ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 22:19:49.359502 1142264 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:19:49.360196 1142264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:19:49.360208 1142264 out.go:374] Setting ErrFile to fd 2...
	I1027 22:19:49.360214 1142264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:19:49.360606 1142264 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:19:49.361000 1142264 mustload.go:66] Loading cluster: addons-789752
	I1027 22:19:49.361675 1142264 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:19:49.361695 1142264 addons.go:606] checking whether the cluster is paused
	I1027 22:19:49.361862 1142264 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:19:49.361880 1142264 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:19:49.363192 1142264 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:19:49.394719 1142264 ssh_runner.go:195] Run: systemctl --version
	I1027 22:19:49.394785 1142264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:19:49.430523 1142264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:19:49.541545 1142264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:19:49.541645 1142264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:19:49.578538 1142264 cri.go:89] found id: "75710d7cc526305b5d44527c3948f7660d0f11c9bb988fea4cc50adb7f70c4b0"
	I1027 22:19:49.578566 1142264 cri.go:89] found id: "ba4375e556d33ee6fe2adbb573ec62057326c21efd49a2ca6746e0e867dca0eb"
	I1027 22:19:49.578571 1142264 cri.go:89] found id: "6360be647f550637a0e7e58311ce8090659f094e7d1fdaace5aa6c9b9e1084ff"
	I1027 22:19:49.578619 1142264 cri.go:89] found id: "718db41ae0e017a0def85acbf7b9a58c43c4917bcde880c3ec1dad468aaa3ad0"
	I1027 22:19:49.578626 1142264 cri.go:89] found id: "fa9874677b5b67f09e92a81d9823e4f1e082a4821a07ab9244b51921cf04483a"
	I1027 22:19:49.578630 1142264 cri.go:89] found id: "e49247d0ffa77a129b4b9b98634538344f523f40499e976caa9a86569158b66d"
	I1027 22:19:49.578633 1142264 cri.go:89] found id: "2a94fd6377a9793dba093bc0477e41ee94cbc624b3f6a43bb885426fc9ced620"
	I1027 22:19:49.578637 1142264 cri.go:89] found id: "1891841b92bc24962a3bc53daf5b28f39360ac3c20a06fa7adc815691b905a55"
	I1027 22:19:49.578645 1142264 cri.go:89] found id: "364352eda05362968f44f25fc3f6a928413dbff5414c84001966e91d713fc3c5"
	I1027 22:19:49.578651 1142264 cri.go:89] found id: "2b141a747edd885ca1f2cb0de68d722d1172c781ee2f1dc422c402f2426b71a5"
	I1027 22:19:49.578654 1142264 cri.go:89] found id: "2e03207b4b26edc5c7672a96ced8ce7c0a8bba6d5d8054568dafe65d952af2fe"
	I1027 22:19:49.578666 1142264 cri.go:89] found id: "c89583e34b204413fbc3cae91a3c194e064a4a74af39d957e557f74a7b9c5dfc"
	I1027 22:19:49.578670 1142264 cri.go:89] found id: "9265cc16ebe00d91c78da71020aea5e78947eb97fca3558b1ee78ec3e8c7ab51"
	I1027 22:19:49.578673 1142264 cri.go:89] found id: "9872fee8e1cf948bd5e39ef7072c2312923b19b6158d32881c3f53e2068a2eba"
	I1027 22:19:49.578676 1142264 cri.go:89] found id: "3c9c0fd6e60966dd77759dd3fca479416d247d034fcaf1409c303183ab3e1ccb"
	I1027 22:19:49.578682 1142264 cri.go:89] found id: "f712dddd4573d0fe9d735c1c15c28d0975b63f01ad7343c996c9ac9e22da6813"
	I1027 22:19:49.578686 1142264 cri.go:89] found id: "a7d75dad24853dbae39098cf151dae187d4239afff3b61a9449981f10b79fd2a"
	I1027 22:19:49.578690 1142264 cri.go:89] found id: "bcef984a34b582632964a62e2ea13989b587a3a34ab7f141ca2d126c15affbb6"
	I1027 22:19:49.578693 1142264 cri.go:89] found id: "a6c04b76522e43566ec49632184d8253b7f3efdd2d549705d0bb56dcd3504b32"
	I1027 22:19:49.578697 1142264 cri.go:89] found id: "f412d82dffe403b62ba84bcc01017d9c6d04b401071fcf54955edab34af34160"
	I1027 22:19:49.578701 1142264 cri.go:89] found id: "ed5258f512747f7de544b7f8b20e30fb6309e5f6031e68aa1d93016b71da54db"
	I1027 22:19:49.578705 1142264 cri.go:89] found id: "b57e96f12e54c8af6eed4bafb19e50128bf903f3ab267cb2c3f7399260b3c948"
	I1027 22:19:49.578715 1142264 cri.go:89] found id: "732fddf2b32debfeea89e5896d571b702244927ab3040765eda956c6120fd6ad"
	I1027 22:19:49.578719 1142264 cri.go:89] found id: ""
	I1027 22:19:49.578779 1142264 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:19:49.596435 1142264 out.go:203] 
	W1027 22:19:49.599614 1142264 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:19:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:19:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:19:49.599639 1142264 out.go:285] * 
	* 
	W1027 22:19:49.618508 1142264 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:19:49.621554 1142264 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-789752 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-789752
helpers_test.go:243: (dbg) docker inspect addons-789752:

-- stdout --
	[
	    {
	        "Id": "a652b6a668fc097b87ba64479bb60d0fa96fd8202cb54c1c465cda9d5582703e",
	        "Created": "2025-10-27T22:16:56.276536241Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1135892,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:16:56.341918453Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/a652b6a668fc097b87ba64479bb60d0fa96fd8202cb54c1c465cda9d5582703e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a652b6a668fc097b87ba64479bb60d0fa96fd8202cb54c1c465cda9d5582703e/hostname",
	        "HostsPath": "/var/lib/docker/containers/a652b6a668fc097b87ba64479bb60d0fa96fd8202cb54c1c465cda9d5582703e/hosts",
	        "LogPath": "/var/lib/docker/containers/a652b6a668fc097b87ba64479bb60d0fa96fd8202cb54c1c465cda9d5582703e/a652b6a668fc097b87ba64479bb60d0fa96fd8202cb54c1c465cda9d5582703e-json.log",
	        "Name": "/addons-789752",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-789752:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-789752",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a652b6a668fc097b87ba64479bb60d0fa96fd8202cb54c1c465cda9d5582703e",
	                "LowerDir": "/var/lib/docker/overlay2/62f87de50b6dbb2bbfe076c22c0f2cec20f2ef1b875795166e656b44d4768fa3-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62f87de50b6dbb2bbfe076c22c0f2cec20f2ef1b875795166e656b44d4768fa3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62f87de50b6dbb2bbfe076c22c0f2cec20f2ef1b875795166e656b44d4768fa3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62f87de50b6dbb2bbfe076c22c0f2cec20f2ef1b875795166e656b44d4768fa3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-789752",
	                "Source": "/var/lib/docker/volumes/addons-789752/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-789752",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-789752",
	                "name.minikube.sigs.k8s.io": "addons-789752",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "812c284ee37f415262529cc381beeff44cbd597eca6c31c4139631dddd8e2112",
	            "SandboxKey": "/var/run/docker/netns/812c284ee37f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34244"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34245"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34248"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34246"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34247"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-789752": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:54:b4:b8:62:43",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "31fd7d19f51759ab9eab49efa050974b3167d16e1fa33389a6c36af428254f1c",
	                    "EndpointID": "1ab705215d64bddc6a7e502cf91fd108b90ad95eeeb0e5728441f639fe128d5f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-789752",
	                        "a652b6a668fc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
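The Ports block in the inspect output above is where the harness gets its SSH endpoint: the earlier sshutil lines (Port:34244) come from the same Go template the cli_runner passes to `docker container inspect -f`. A minimal sketch of that extraction, assuming only docker on PATH and the addons-789752 container from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same template string the cli_runner lines above show.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-789752").Output()
	if err != nil {
		log.Fatal(err)
	}
	// For this report's container the template yields 34244, matching the
	// sshutil "new ssh client: &{IP:127.0.0.1 Port:34244 ...}" entries.
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}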
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-789752 -n addons-789752
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-789752 logs -n 25: (1.727412388s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-007224 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-007224   │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │ 27 Oct 25 22:16 UTC │
	│ delete  │ -p download-only-007224                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-007224   │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │ 27 Oct 25 22:16 UTC │
	│ start   │ -o=json --download-only -p download-only-798916 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-798916   │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │ 27 Oct 25 22:16 UTC │
	│ delete  │ -p download-only-798916                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-798916   │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │ 27 Oct 25 22:16 UTC │
	│ delete  │ -p download-only-007224                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-007224   │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │ 27 Oct 25 22:16 UTC │
	│ delete  │ -p download-only-798916                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-798916   │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │ 27 Oct 25 22:16 UTC │
	│ start   │ --download-only -p download-docker-332028 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-332028 │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │                     │
	│ delete  │ -p download-docker-332028                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-332028 │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │ 27 Oct 25 22:16 UTC │
	│ start   │ --download-only -p binary-mirror-961152 --alsologtostderr --binary-mirror http://127.0.0.1:35369 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-961152   │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │                     │
	│ delete  │ -p binary-mirror-961152                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-961152   │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │ 27 Oct 25 22:16 UTC │
	│ addons  │ enable dashboard -p addons-789752                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │                     │
	│ addons  │ disable dashboard -p addons-789752                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │                     │
	│ start   │ -p addons-789752 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │ 27 Oct 25 22:19 UTC │
	│ addons  │ addons-789752 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │                     │
	│ addons  │ addons-789752 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │                     │
	│ addons  │ addons-789752 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │                     │
	│ addons  │ addons-789752 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │                     │
	│ ip      │ addons-789752 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │ 27 Oct 25 22:19 UTC │
	│ addons  │ addons-789752 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │                     │
	│ ssh     │ addons-789752 ssh cat /opt/local-path-provisioner/pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │ 27 Oct 25 22:19 UTC │
	│ addons  │ addons-789752 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │                     │
	│ addons  │ enable headlamp -p addons-789752 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │                     │
	│ addons  │ addons-789752 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-789752          │ jenkins │ v1.37.0 │ 27 Oct 25 22:19 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:16:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:16:30.580876 1135488 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:16:30.581040 1135488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:16:30.581050 1135488 out.go:374] Setting ErrFile to fd 2...
	I1027 22:16:30.581056 1135488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:16:30.581305 1135488 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:16:30.581749 1135488 out.go:368] Setting JSON to false
	I1027 22:16:30.582699 1135488 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17940,"bootTime":1761585451,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 22:16:30.582764 1135488 start.go:143] virtualization:  
	I1027 22:16:30.585995 1135488 out.go:179] * [addons-789752] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 22:16:30.589825 1135488 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:16:30.589939 1135488 notify.go:221] Checking for updates...
	I1027 22:16:30.595479 1135488 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:16:30.598306 1135488 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 22:16:30.601119 1135488 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 22:16:30.604025 1135488 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 22:16:30.606930 1135488 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:16:30.610045 1135488 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:16:30.632067 1135488 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 22:16:30.632188 1135488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:16:30.684904 1135488 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-27 22:16:30.67621124 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:16:30.685011 1135488 docker.go:318] overlay module found
	I1027 22:16:30.688051 1135488 out.go:179] * Using the docker driver based on user configuration
	I1027 22:16:30.690921 1135488 start.go:307] selected driver: docker
	I1027 22:16:30.690942 1135488 start.go:928] validating driver "docker" against <nil>
	I1027 22:16:30.690965 1135488 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:16:30.691699 1135488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:16:30.752785 1135488 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-27 22:16:30.743447368 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
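The two `docker info` dumps above are minikube validating the selected driver: it shells out to `docker system info --format "{{json .}}"` and decodes the JSON to check host capabilities (CPU count, memory, cgroup driver). A minimal sketch of that probe, assuming only the standard library and a `docker` CLI on PATH; the struct fields are ones visible in the dump:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo holds the handful of fields a driver-validation step cares about.
    type dockerInfo struct {
        NCPU         int    `json:"NCPU"`
        MemTotal     int64  `json:"MemTotal"`
        CgroupDriver string `json:"CgroupDriver"`
        OSType       string `json:"OSType"`
    }

    func main() {
        // Equivalent of: docker system info --format "{{json .}}"
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("cpus=%d mem=%d cgroup=%s os=%s\n",
            info.NCPU, info.MemTotal, info.CgroupDriver, info.OSType)
    }
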
	I1027 22:16:30.752944 1135488 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 22:16:30.753186 1135488 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:16:30.756187 1135488 out.go:179] * Using Docker driver with root privileges
	I1027 22:16:30.758950 1135488 cni.go:84] Creating CNI manager for ""
	I1027 22:16:30.759035 1135488 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:16:30.759050 1135488 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 22:16:30.759132 1135488 start.go:351] cluster config:
	{Name:addons-789752 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-789752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:16:30.762177 1135488 out.go:179] * Starting "addons-789752" primary control-plane node in "addons-789752" cluster
	I1027 22:16:30.765068 1135488 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:16:30.768066 1135488 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:16:30.770914 1135488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:16:30.770985 1135488 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 22:16:30.770998 1135488 cache.go:59] Caching tarball of preloaded images
	I1027 22:16:30.770997 1135488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:16:30.771091 1135488 preload.go:233] Found /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 22:16:30.771101 1135488 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
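The preload check above is a plain file-existence test against the cached tarball before any download is attempted. A sketch under that assumption (the cache path mirrors this run's layout; adjust for your own `.minikube` directory):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Hypothetical cache layout mirroring the log above.
        cacheDir := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball")
        tarball := filepath.Join(cacheDir,
            "preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4")

        if _, err := os.Stat(tarball); err == nil {
            fmt.Println("found local preload, skipping download:", tarball)
        } else if os.IsNotExist(err) {
            fmt.Println("no local preload, would download:", tarball)
        } else {
            panic(err)
        }
    }
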
	I1027 22:16:30.771440 1135488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/config.json ...
	I1027 22:16:30.771470 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/config.json: {Name:mke88408baa530750bd9d1795792eabe215b0eaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:16:30.787525 1135488 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 22:16:30.787669 1135488 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 22:16:30.787695 1135488 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1027 22:16:30.787701 1135488 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1027 22:16:30.787713 1135488 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1027 22:16:30.787723 1135488 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1027 22:16:48.564650 1135488 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1027 22:16:48.564693 1135488 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:16:48.564736 1135488 start.go:360] acquireMachinesLock for addons-789752: {Name:mka636defb696345efb99c891c420d0f693c9864 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:16:48.565511 1135488 start.go:364] duration metric: took 748.088µs to acquireMachinesLock for "addons-789752"
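acquireMachinesLock serializes machine creation across concurrent minikube processes, retrying every 500ms up to a 10m timeout per the parameters logged above. minikube's real lock comes from a locking library; what follows is only a Linux-only flock(2) sketch with the same delay/timeout shape:

    package main

    import (
        "fmt"
        "os"
        "syscall"
        "time"
    )

    // acquire takes an exclusive flock on path, retrying every delay up to timeout,
    // mirroring the Delay:500ms Timeout:10m0s parameters in the log line above.
    func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0600)
        if err != nil {
            return nil, err
        }
        deadline := time.Now().Add(timeout)
        for {
            err = syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB)
            if err == nil {
                return f, nil
            }
            if time.Now().After(deadline) {
                f.Close()
                return nil, fmt.Errorf("timed out waiting for %s: %w", path, err)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        f, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer f.Close()
        fmt.Println("lock held; safe to provision")
    }
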
	I1027 22:16:48.565552 1135488 start.go:93] Provisioning new machine with config: &{Name:addons-789752 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-789752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:16:48.565634 1135488 start.go:125] createHost starting for "" (driver="docker")
	I1027 22:16:48.569008 1135488 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1027 22:16:48.569236 1135488 start.go:159] libmachine.API.Create for "addons-789752" (driver="docker")
	I1027 22:16:48.569271 1135488 client.go:173] LocalClient.Create starting
	I1027 22:16:48.569397 1135488 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem
	I1027 22:16:49.384714 1135488 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem
	I1027 22:16:49.775495 1135488 cli_runner.go:164] Run: docker network inspect addons-789752 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 22:16:49.792236 1135488 cli_runner.go:211] docker network inspect addons-789752 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 22:16:49.792319 1135488 network_create.go:284] running [docker network inspect addons-789752] to gather additional debugging logs...
	I1027 22:16:49.792340 1135488 cli_runner.go:164] Run: docker network inspect addons-789752
	W1027 22:16:49.808583 1135488 cli_runner.go:211] docker network inspect addons-789752 returned with exit code 1
	I1027 22:16:49.808614 1135488 network_create.go:287] error running [docker network inspect addons-789752]: docker network inspect addons-789752: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-789752 not found
	I1027 22:16:49.808627 1135488 network_create.go:289] output of [docker network inspect addons-789752]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-789752 not found
	
	** /stderr **
	I1027 22:16:49.808724 1135488 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:16:49.824904 1135488 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c80640}
	I1027 22:16:49.824949 1135488 network_create.go:124] attempt to create docker network addons-789752 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1027 22:16:49.825004 1135488 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-789752 addons-789752
	I1027 22:16:49.883599 1135488 network_create.go:108] docker network addons-789752 192.168.49.0/24 created
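network_create.go picks the first free private /24 (192.168.49.0/24 here) and creates a labeled bridge network with a fixed gateway and MTU; the exact CLI call is logged two lines up. A replay of that call via os/exec, with the profile name taken from this run:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        name := "addons-789752"
        // Mirrors the `docker network create` invocation logged above.
        cmd := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet=192.168.49.0/24",
            "--gateway=192.168.49.1",
            "-o", "--ip-masq", "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io="+name,
            name)
        out, err := cmd.CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("network create failed: %v\n%s", err, out))
        }
        fmt.Printf("created network %s: %s", name, out)
    }
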
	I1027 22:16:49.883641 1135488 kic.go:121] calculated static IP "192.168.49.2" for the "addons-789752" container
	I1027 22:16:49.883736 1135488 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 22:16:49.898989 1135488 cli_runner.go:164] Run: docker volume create addons-789752 --label name.minikube.sigs.k8s.io=addons-789752 --label created_by.minikube.sigs.k8s.io=true
	I1027 22:16:49.915708 1135488 oci.go:103] Successfully created a docker volume addons-789752
	I1027 22:16:49.915807 1135488 cli_runner.go:164] Run: docker run --rm --name addons-789752-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-789752 --entrypoint /usr/bin/test -v addons-789752:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 22:16:51.548725 1135488 cli_runner.go:217] Completed: docker run --rm --name addons-789752-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-789752 --entrypoint /usr/bin/test -v addons-789752:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (1.632875871s)
	I1027 22:16:51.548759 1135488 oci.go:107] Successfully prepared a docker volume addons-789752
	I1027 22:16:51.548795 1135488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:16:51.548818 1135488 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 22:16:51.548883 1135488 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-789752:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 22:16:56.202232 1135488 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-789752:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.653298357s)
	I1027 22:16:56.202266 1135488 kic.go:203] duration metric: took 4.653444493s to extract preloaded images to volume ...
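The ~4.65s step above is how the preload lands in the volume: the lz4 tarball is bind-mounted read-only into a throwaway container whose entrypoint is tar, which extracts into the named volume at /extractDir. A sketch of the same invocation (the tarball path is a placeholder for the cached file shown in the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tarball := "/path/to/preloaded-images.tar.lz4" // placeholder for the cached tarball
        volume := "addons-789752"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"

        // Mirrors the tar sidecar logged above: tar runs inside the container,
        // so the host needs neither lz4 nor write access to the volume.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            panic(fmt.Sprintf("extract failed: %v\n%s", err, out))
        }
        fmt.Println("preload extracted into volume", volume)
    }
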
	W1027 22:16:56.202409 1135488 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 22:16:56.202533 1135488 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 22:16:56.261779 1135488 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-789752 --name addons-789752 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-789752 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-789752 --network addons-789752 --ip 192.168.49.2 --volume addons-789752:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 22:16:56.572116 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Running}}
	I1027 22:16:56.592646 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:16:56.614601 1135488 cli_runner.go:164] Run: docker exec addons-789752 stat /var/lib/dpkg/alternatives/iptables
	I1027 22:16:56.660529 1135488 oci.go:144] the created container "addons-789752" has a running status.
	I1027 22:16:56.660563 1135488 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa...
	I1027 22:16:57.111763 1135488 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 22:16:57.131337 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:16:57.148556 1135488 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 22:16:57.148575 1135488 kic_runner.go:114] Args: [docker exec --privileged addons-789752 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 22:16:57.188824 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:16:57.205446 1135488 machine.go:94] provisionDockerMachine start ...
	I1027 22:16:57.205585 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:16:57.222727 1135488 main.go:143] libmachine: Using SSH client type: native
	I1027 22:16:57.223066 1135488 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34244 <nil> <nil>}
	I1027 22:16:57.223082 1135488 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:16:57.223721 1135488 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
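That EOF is expected on a fresh container: the published SSH port accepts connections before sshd inside has finished starting, so the provisioner simply retries the handshake until it succeeds (the following line shows the eventual `hostname` output). A minimal retry loop, assuming golang.org/x/crypto/ssh and the key generated above; 127.0.0.1:34244 is this run's mapped port:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/addons-789752/id_rsa"))
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, key not pinned
            Timeout:         5 * time.Second,
        }
        // Retry the handshake: sshd may not be up yet, producing EOF as above.
        var client *ssh.Client
        for i := 0; i < 30; i++ {
            client, err = ssh.Dial("tcp", "127.0.0.1:34244", cfg)
            if err == nil {
                break
            }
            time.Sleep(time.Second)
        }
        if client == nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("ssh is up")
    }
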
	I1027 22:17:00.477344 1135488 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-789752
	
	I1027 22:17:00.477421 1135488 ubuntu.go:182] provisioning hostname "addons-789752"
	I1027 22:17:00.477521 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:00.509330 1135488 main.go:143] libmachine: Using SSH client type: native
	I1027 22:17:00.509682 1135488 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34244 <nil> <nil>}
	I1027 22:17:00.509695 1135488 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-789752 && echo "addons-789752" | sudo tee /etc/hostname
	I1027 22:17:00.680350 1135488 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-789752
	
	I1027 22:17:00.680450 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:00.700903 1135488 main.go:143] libmachine: Using SSH client type: native
	I1027 22:17:00.701236 1135488 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34244 <nil> <nil>}
	I1027 22:17:00.701252 1135488 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-789752' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-789752/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-789752' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:17:00.850894 1135488 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:17:00.850924 1135488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 22:17:00.850944 1135488 ubuntu.go:190] setting up certificates
	I1027 22:17:00.850954 1135488 provision.go:84] configureAuth start
	I1027 22:17:00.851019 1135488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-789752
	I1027 22:17:00.869776 1135488 provision.go:143] copyHostCerts
	I1027 22:17:00.869866 1135488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 22:17:00.870002 1135488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 22:17:00.870065 1135488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 22:17:00.870120 1135488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.addons-789752 san=[127.0.0.1 192.168.49.2 addons-789752 localhost minikube]
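configureAuth signs a server certificate against the shared CA with exactly the SAN list in the line above. A compact sketch of that signing step with crypto/x509; for brevity it also generates the CA in memory (minikube loads it from ca.pem/ca-key.pem) and elides error checks:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Assumed pre-existing CA; generated here only to keep the sketch self-contained.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SANs listed in the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-789752"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"addons-789752", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
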
	I1027 22:17:00.959965 1135488 provision.go:177] copyRemoteCerts
	I1027 22:17:00.960040 1135488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:17:00.960080 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:00.977718 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:01.083033 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 22:17:01.103119 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 22:17:01.122559 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 22:17:01.143200 1135488 provision.go:87] duration metric: took 292.220535ms to configureAuth
	I1027 22:17:01.143226 1135488 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:17:01.143432 1135488 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:17:01.143535 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:01.163024 1135488 main.go:143] libmachine: Using SSH client type: native
	I1027 22:17:01.163387 1135488 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34244 <nil> <nil>}
	I1027 22:17:01.163410 1135488 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:17:01.432338 1135488 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:17:01.432358 1135488 machine.go:97] duration metric: took 4.226889139s to provisionDockerMachine
	I1027 22:17:01.432369 1135488 client.go:176] duration metric: took 12.863090915s to LocalClient.Create
	I1027 22:17:01.432382 1135488 start.go:167] duration metric: took 12.863147105s to libmachine.API.Create "addons-789752"
	I1027 22:17:01.432390 1135488 start.go:293] postStartSetup for "addons-789752" (driver="docker")
	I1027 22:17:01.432399 1135488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:17:01.432475 1135488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:17:01.432514 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:01.459297 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:01.567203 1135488 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:17:01.570847 1135488 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:17:01.570882 1135488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:17:01.570896 1135488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 22:17:01.571018 1135488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 22:17:01.571065 1135488 start.go:296] duration metric: took 138.668869ms for postStartSetup
	I1027 22:17:01.571477 1135488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-789752
	I1027 22:17:01.589818 1135488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/config.json ...
	I1027 22:17:01.590153 1135488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:17:01.590201 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:01.608387 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:01.712289 1135488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:17:01.717212 1135488 start.go:128] duration metric: took 13.15155971s to createHost
	I1027 22:17:01.717241 1135488 start.go:83] releasing machines lock for "addons-789752", held for 13.151711572s
	I1027 22:17:01.717312 1135488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-789752
	I1027 22:17:01.734454 1135488 ssh_runner.go:195] Run: cat /version.json
	I1027 22:17:01.734526 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:01.734591 1135488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:17:01.734673 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:01.756293 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:01.760687 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:01.955021 1135488 ssh_runner.go:195] Run: systemctl --version
	I1027 22:17:01.961591 1135488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:17:01.999599 1135488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:17:02.005063 1135488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:17:02.005221 1135488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:17:02.039316 1135488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1027 22:17:02.039394 1135488 start.go:496] detecting cgroup driver to use...
	I1027 22:17:02.039443 1135488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 22:17:02.039532 1135488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:17:02.058293 1135488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:17:02.072095 1135488 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:17:02.072211 1135488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:17:02.091763 1135488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:17:02.112262 1135488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:17:02.246849 1135488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:17:02.387792 1135488 docker.go:234] disabling docker service ...
	I1027 22:17:02.387950 1135488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:17:02.414740 1135488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:17:02.428728 1135488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:17:02.550070 1135488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:17:02.673547 1135488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:17:02.687693 1135488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:17:02.703825 1135488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:17:02.703902 1135488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:17:02.713391 1135488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 22:17:02.713513 1135488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:17:02.723245 1135488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:17:02.732494 1135488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:17:02.741683 1135488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:17:02.750467 1135488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:17:02.759664 1135488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:17:02.774315 1135488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:17:02.783707 1135488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:17:02.792777 1135488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:17:02.800695 1135488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:17:02.915786 1135488 ssh_runner.go:195] Run: sudo systemctl restart crio
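The run of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: pin the pause image, force the cgroupfs cgroup manager, and re-create `conmon_cgroup = "pod"` directly after it. A local re-creation of those substitutions with regexp, applied to a toy config string rather than the real file:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"`

        // Same rewrites the sed pipeline above applies to 02-crio.conf.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n?`).
            ReplaceAllString(conf, "") // dropped, then re-added after cgroup_manager
        conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
            ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")

        fmt.Println(conf)
    }
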
	I1027 22:17:03.051340 1135488 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:17:03.051461 1135488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:17:03.055943 1135488 start.go:564] Will wait 60s for crictl version
	I1027 22:17:03.056032 1135488 ssh_runner.go:195] Run: which crictl
	I1027 22:17:03.060404 1135488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:17:03.088948 1135488 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 22:17:03.089108 1135488 ssh_runner.go:195] Run: crio --version
	I1027 22:17:03.120282 1135488 ssh_runner.go:195] Run: crio --version
	I1027 22:17:03.152297 1135488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:17:03.155230 1135488 cli_runner.go:164] Run: docker network inspect addons-789752 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:17:03.172436 1135488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1027 22:17:03.176766 1135488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
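This hosts update never edits /etc/hosts in place: the shell one-liner above filters out any stale host.minikube.internal mapping with grep -v, appends the fresh one, writes the result to a temp file, and copies it back with sudo. The same filter-append-replace shape in Go, run against a scratch file instead of /etc/hosts:

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const hostsPath = "/tmp/hosts.demo" // scratch stand-in for /etc/hosts
        const name = "host.minikube.internal"
        const entry = "192.168.49.1\t" + name

        // Seed the scratch file with a stale mapping plus an unrelated line.
        if err := os.WriteFile(hostsPath, []byte("127.0.0.1\tlocalhost\n10.0.0.9\t"+name+"\n"), 0644); err != nil {
            panic(err)
        }

        data, err := os.ReadFile(hostsPath)
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale mapping, as the grep -v in the logged command does.
            if strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)

        // Write to a temp path, then swap it in (minikube uses `sudo cp` instead).
        tmp := hostsPath + ".new"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
        if err := os.Rename(tmp, hostsPath); err != nil {
            panic(err)
        }
    }
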
	I1027 22:17:03.188132 1135488 kubeadm.go:884] updating cluster {Name:addons-789752 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-789752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:17:03.188255 1135488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:17:03.188318 1135488 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:17:03.226466 1135488 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:17:03.226497 1135488 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:17:03.226556 1135488 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:17:03.252494 1135488 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:17:03.252520 1135488 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:17:03.252529 1135488 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1027 22:17:03.252618 1135488 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-789752 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-789752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 22:17:03.252714 1135488 ssh_runner.go:195] Run: crio config
	I1027 22:17:03.307838 1135488 cni.go:84] Creating CNI manager for ""
	I1027 22:17:03.307908 1135488 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:17:03.307956 1135488 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:17:03.308009 1135488 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-789752 NodeName:addons-789752 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:17:03.308185 1135488 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-789752"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 22:17:03.308278 1135488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:17:03.316737 1135488 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:17:03.316816 1135488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:17:03.325120 1135488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1027 22:17:03.338361 1135488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:17:03.351971 1135488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
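The 2210-byte kubeadm.yaml.new shipped above is the YAML printed earlier, rendered from the kubeadm options struct and scp'd to the node. A hedged sketch of that rendering with text/template; kubeadmOpts and the trimmed template are illustrative, not minikube's actual types:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeadmOpts is a hypothetical subset of the options that feed the config above.
    type kubeadmOpts struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        PodSubnet        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
    ---
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
        opts := kubeadmOpts{
            AdvertiseAddress: "192.168.49.2",
            BindPort:         8443,
            NodeName:         "addons-789752",
            PodSubnet:        "10.244.0.0/16",
        }
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        if err := t.Execute(os.Stdout, opts); err != nil {
            panic(err)
        }
    }
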
	I1027 22:17:03.365613 1135488 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:17:03.369479 1135488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 22:17:03.380402 1135488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:17:03.496853 1135488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:17:03.514973 1135488 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752 for IP: 192.168.49.2
	I1027 22:17:03.515004 1135488 certs.go:195] generating shared ca certs ...
	I1027 22:17:03.515035 1135488 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:03.515207 1135488 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 22:17:04.092899 1135488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt ...
	I1027 22:17:04.092935 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt: {Name:mk3d1ca9953d79b82e69ddd2b9bf0e1e9d4fc081 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:04.093759 1135488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key ...
	I1027 22:17:04.093779 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key: {Name:mk37097ff8d48d4c2d9e5dcc3749355e59f34b6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:04.093872 1135488 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 22:17:05.073432 1135488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt ...
	I1027 22:17:05.073468 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt: {Name:mkc6c7fe2cd51ad060e70d00520c07e6b8c3502c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:05.073671 1135488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key ...
	I1027 22:17:05.073686 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key: {Name:mk769d80a91bd0cfa1b5e6c741e3a5507bd17b68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:05.073776 1135488 certs.go:257] generating profile certs ...
	I1027 22:17:05.073835 1135488 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.key
	I1027 22:17:05.073854 1135488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt with IP's: []
	I1027 22:17:05.320429 1135488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt ...
	I1027 22:17:05.320475 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: {Name:mk9700168e780e5824228759b3d5fa3c0e849cb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:05.320673 1135488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.key ...
	I1027 22:17:05.320686 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.key: {Name:mk1f49e59a2be1496fcf09d2ca87a4f86d10357e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:05.320780 1135488 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.key.4f3a3f92
	I1027 22:17:05.320800 1135488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.crt.4f3a3f92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1027 22:17:06.349038 1135488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.crt.4f3a3f92 ...
	I1027 22:17:06.349073 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.crt.4f3a3f92: {Name:mk6add6615215d0c06da589649660f246c0aa3d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:06.349907 1135488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.key.4f3a3f92 ...
	I1027 22:17:06.349926 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.key.4f3a3f92: {Name:mk213c3bbf37fb1f4c149ede56d89ea432225480 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:06.350015 1135488 certs.go:382] copying /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.crt.4f3a3f92 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.crt
	I1027 22:17:06.350109 1135488 certs.go:386] copying /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.key.4f3a3f92 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.key
	I1027 22:17:06.350166 1135488 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/proxy-client.key
	I1027 22:17:06.350187 1135488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/proxy-client.crt with IP's: []
	I1027 22:17:06.752934 1135488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/proxy-client.crt ...
	I1027 22:17:06.752972 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/proxy-client.crt: {Name:mk70691924aeec9f578f4353fa8dfa906deb8f1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:06.753174 1135488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/proxy-client.key ...
	I1027 22:17:06.753196 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/proxy-client.key: {Name:mk0c09fe9610f9d81659d14ba30d07312ecd3100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:06.753413 1135488 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:17:06.753455 1135488 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 22:17:06.753484 1135488 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:17:06.753511 1135488 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 22:17:06.754065 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:17:06.773288 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 22:17:06.793546 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:17:06.813000 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:17:06.832014 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 22:17:06.851545 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 22:17:06.871465 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:17:06.889725 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 22:17:06.908761 1135488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:17:06.928158 1135488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:17:06.941863 1135488 ssh_runner.go:195] Run: openssl version
	I1027 22:17:06.948517 1135488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:17:06.957282 1135488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:17:06.961320 1135488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:17:06.961430 1135488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:17:07.009238 1135488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
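The `openssl x509 -hash` call above computes the subject-name hash that OpenSSL uses to look up CAs, and the `ln -fs` that follows installs the `<hash>.0` symlink (b5213941.0 here) in /etc/ssl/certs. The same two steps sketched in Go; point certsDir at a scratch directory to avoid needing root:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        certPath := "/usr/share/ca-certificates/minikubeCA.pem"
        certsDir := "/etc/ssl/certs" // use a scratch dir when experimenting

        // openssl x509 -hash -noout -in <cert> prints the subject-name hash.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))

        // OpenSSL resolves CAs via <hash>.0 symlinks in the certs directory.
        link := filepath.Join(certsDir, hash+".0")
        os.Remove(link) // replace any stale link, like ln -fs
        if err := os.Symlink(certPath, link); err != nil {
            panic(err)
        }
        fmt.Println("installed", link)
    }
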
	I1027 22:17:07.018240 1135488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:17:07.021837 1135488 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 22:17:07.021885 1135488 kubeadm.go:401] StartCluster: {Name:addons-789752 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-789752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:17:07.021974 1135488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:17:07.022034 1135488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:17:07.050835 1135488 cri.go:89] found id: ""
	I1027 22:17:07.050923 1135488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:17:07.059062 1135488 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:17:07.067179 1135488 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 22:17:07.067249 1135488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:17:07.076034 1135488 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 22:17:07.076074 1135488 kubeadm.go:158] found existing configuration files:
	
	I1027 22:17:07.076131 1135488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 22:17:07.084492 1135488 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 22:17:07.084563 1135488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 22:17:07.092609 1135488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 22:17:07.101343 1135488 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 22:17:07.101536 1135488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:17:07.109643 1135488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 22:17:07.117976 1135488 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 22:17:07.118046 1135488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:17:07.125662 1135488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 22:17:07.135490 1135488 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 22:17:07.135629 1135488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
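The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is otherwise deleted so kubeadm can regenerate it. Per file, the logic is roughly:

    f=/etc/kubernetes/admin.conf
    # keep the file only if it targets the expected endpoint; otherwise remove it
    sudo grep -q "https://control-plane.minikube.internal:8443" "$f" || sudo rm -f "$f"

Here all four files are absent (grep exits with status 2), so every rm is a no-op and kubeadm starts from a clean slate.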
	I1027 22:17:07.143353 1135488 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 22:17:07.198083 1135488 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 22:17:07.199256 1135488 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 22:17:07.240585 1135488 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 22:17:07.240661 1135488 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 22:17:07.240709 1135488 kubeadm.go:319] OS: Linux
	I1027 22:17:07.240764 1135488 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 22:17:07.240819 1135488 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1027 22:17:07.240872 1135488 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 22:17:07.240927 1135488 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 22:17:07.240982 1135488 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 22:17:07.241034 1135488 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 22:17:07.241086 1135488 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 22:17:07.241142 1135488 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 22:17:07.241196 1135488 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1027 22:17:07.311967 1135488 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 22:17:07.312098 1135488 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 22:17:07.312200 1135488 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 22:17:07.322905 1135488 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 22:17:07.329092 1135488 out.go:252]   - Generating certificates and keys ...
	I1027 22:17:07.329281 1135488 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 22:17:07.329416 1135488 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 22:17:08.892933 1135488 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 22:17:09.406541 1135488 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 22:17:10.738372 1135488 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 22:17:11.101915 1135488 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 22:17:11.593193 1135488 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 22:17:11.593553 1135488 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-789752 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1027 22:17:12.332425 1135488 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 22:17:12.332784 1135488 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-789752 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1027 22:17:12.506737 1135488 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 22:17:13.577590 1135488 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 22:17:14.609769 1135488 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 22:17:14.610070 1135488 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 22:17:15.043944 1135488 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 22:17:16.209102 1135488 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 22:17:16.344266 1135488 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 22:17:16.899742 1135488 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 22:17:18.627074 1135488 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 22:17:18.628223 1135488 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 22:17:18.631290 1135488 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 22:17:18.634739 1135488 out.go:252]   - Booting up control plane ...
	I1027 22:17:18.634843 1135488 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 22:17:18.634925 1135488 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 22:17:18.636136 1135488 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 22:17:18.652465 1135488 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 22:17:18.652821 1135488 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 22:17:18.660208 1135488 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 22:17:18.660565 1135488 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 22:17:18.660804 1135488 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 22:17:18.787766 1135488 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 22:17:18.787891 1135488 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 22:17:19.788564 1135488 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000891412s
	I1027 22:17:19.792584 1135488 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 22:17:19.792688 1135488 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1027 22:17:19.793019 1135488 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 22:17:19.793110 1135488 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 22:17:22.595821 1135488 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.802790665s
	I1027 22:17:23.693531 1135488 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.900924176s
	I1027 22:17:25.795276 1135488 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002582828s
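The three control-plane checks poll the health endpoints printed above; they can be reproduced by hand from inside the node. The -k flag skips certificate verification, matching kubeadm's own insecure probe, and the health paths are exempt from authorization by default:

    curl -k https://192.168.49.2:8443/livez      # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez        # kube-scheduler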
	I1027 22:17:25.814592 1135488 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 22:17:25.834186 1135488 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 22:17:25.847886 1135488 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 22:17:25.848113 1135488 kubeadm.go:319] [mark-control-plane] Marking the node addons-789752 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 22:17:25.861453 1135488 kubeadm.go:319] [bootstrap-token] Using token: yt42fj.1l94hwf0zkgx61b4
	I1027 22:17:25.864650 1135488 out.go:252]   - Configuring RBAC rules ...
	I1027 22:17:25.864800 1135488 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 22:17:25.869081 1135488 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 22:17:25.877392 1135488 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 22:17:25.883954 1135488 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 22:17:25.888031 1135488 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 22:17:25.892008 1135488 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 22:17:26.202130 1135488 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 22:17:26.630775 1135488 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 22:17:27.202687 1135488 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 22:17:27.203910 1135488 kubeadm.go:319] 
	I1027 22:17:27.203996 1135488 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 22:17:27.204025 1135488 kubeadm.go:319] 
	I1027 22:17:27.204112 1135488 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 22:17:27.204121 1135488 kubeadm.go:319] 
	I1027 22:17:27.204149 1135488 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 22:17:27.204216 1135488 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 22:17:27.204273 1135488 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 22:17:27.204282 1135488 kubeadm.go:319] 
	I1027 22:17:27.204339 1135488 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 22:17:27.204348 1135488 kubeadm.go:319] 
	I1027 22:17:27.204399 1135488 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 22:17:27.204407 1135488 kubeadm.go:319] 
	I1027 22:17:27.204463 1135488 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 22:17:27.204547 1135488 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 22:17:27.204623 1135488 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 22:17:27.204632 1135488 kubeadm.go:319] 
	I1027 22:17:27.204721 1135488 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 22:17:27.204809 1135488 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 22:17:27.204818 1135488 kubeadm.go:319] 
	I1027 22:17:27.204906 1135488 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yt42fj.1l94hwf0zkgx61b4 \
	I1027 22:17:27.205019 1135488 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 \
	I1027 22:17:27.205043 1135488 kubeadm.go:319] 	--control-plane 
	I1027 22:17:27.205051 1135488 kubeadm.go:319] 
	I1027 22:17:27.205141 1135488 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 22:17:27.205150 1135488 kubeadm.go:319] 
	I1027 22:17:27.205236 1135488 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yt42fj.1l94hwf0zkgx61b4 \
	I1027 22:17:27.205353 1135488 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 
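The bootstrap token embedded in both join commands (yt42fj.1l94hwf0zkgx61b4) is short-lived, 24 hours by default; if it has expired by the time another node joins, a fresh command can be printed on the control plane with:

    kubeadm token create --print-join-command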
	I1027 22:17:27.209538 1135488 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 22:17:27.209774 1135488 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 22:17:27.209890 1135488 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
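All three warnings are expected in this environment: the kicbase container runs on the host's cgroups v1 kernel, the kernel-config check needs /proc/config.gz, which the 5.15.0-1084-aws kernel cannot expose because its configs module is missing, and minikube starts kubelet itself instead of enabling the systemd unit. A sketch of the check kubeadm attempted, assuming a kernel that builds IKCONFIG as a module:

    sudo modprobe configs                      # fails on this kernel: Module configs not found
    zgrep CONFIG_CGROUPS /proc/config.gz       # what kubeadm would then inspect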
	I1027 22:17:27.209910 1135488 cni.go:84] Creating CNI manager for ""
	I1027 22:17:27.209923 1135488 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:17:27.213098 1135488 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 22:17:27.215846 1135488 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 22:17:27.219873 1135488 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 22:17:27.219897 1135488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 22:17:27.233096 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
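With the docker driver and the crio runtime, minikube falls back to the kindnet CNI manifest applied above. Once the apply settles, the plugin's rollout can be checked through its DaemonSet; the name kindnet in kube-system is assumed here, since the manifest contents are not shown in this log:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get daemonset kindnet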
	I1027 22:17:27.528495 1135488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 22:17:27.528633 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:27.528740 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-789752 minikube.k8s.io/updated_at=2025_10_27T22_17_27_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=addons-789752 minikube.k8s.io/primary=true
	I1027 22:17:27.679149 1135488 ops.go:34] apiserver oom_adj: -16
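The -16 logged above is the value read back from the earlier cat of /proc/$(pgrep kube-apiserver)/oom_adj: kube-apiserver runs with a strongly negative OOM adjustment so the kernel's OOM killer avoids it under memory pressure. oom_adj is the legacy interface; current kernels translate it into oom_score_adj:

    cat /proc/$(pgrep kube-apiserver)/oom_adj          # legacy scale (-17..15), -16 here
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj    # modern scale (-1000..1000)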
	I1027 22:17:27.679256 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:28.179980 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:28.679634 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:29.180264 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:29.680127 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:30.179664 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:30.679590 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:31.180098 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:31.679963 1135488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 22:17:31.832844 1135488 kubeadm.go:1114] duration metric: took 4.304251448s to wait for elevateKubeSystemPrivileges
	I1027 22:17:31.832881 1135488 kubeadm.go:403] duration metric: took 24.810998943s to StartCluster
	I1027 22:17:31.832899 1135488 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:31.833657 1135488 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 22:17:31.834059 1135488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:17:31.834278 1135488 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:17:31.834436 1135488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 22:17:31.834708 1135488 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:17:31.834746 1135488 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1027 22:17:31.834831 1135488 addons.go:69] Setting yakd=true in profile "addons-789752"
	I1027 22:17:31.834851 1135488 addons.go:238] Setting addon yakd=true in "addons-789752"
	I1027 22:17:31.834873 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.835358 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.835804 1135488 addons.go:69] Setting inspektor-gadget=true in profile "addons-789752"
	I1027 22:17:31.835827 1135488 addons.go:238] Setting addon inspektor-gadget=true in "addons-789752"
	I1027 22:17:31.835851 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.836275 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.836418 1135488 addons.go:69] Setting metrics-server=true in profile "addons-789752"
	I1027 22:17:31.836448 1135488 addons.go:238] Setting addon metrics-server=true in "addons-789752"
	I1027 22:17:31.836506 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.836928 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.838593 1135488 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-789752"
	I1027 22:17:31.838625 1135488 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-789752"
	I1027 22:17:31.838662 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.839109 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.839547 1135488 addons.go:69] Setting registry=true in profile "addons-789752"
	I1027 22:17:31.839575 1135488 addons.go:238] Setting addon registry=true in "addons-789752"
	I1027 22:17:31.839601 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.840036 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.855237 1135488 addons.go:69] Setting registry-creds=true in profile "addons-789752"
	I1027 22:17:31.855273 1135488 addons.go:238] Setting addon registry-creds=true in "addons-789752"
	I1027 22:17:31.855309 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.855770 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.857893 1135488 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-789752"
	I1027 22:17:31.857928 1135488 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-789752"
	I1027 22:17:31.857962 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.858448 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.871265 1135488 addons.go:69] Setting cloud-spanner=true in profile "addons-789752"
	I1027 22:17:31.871315 1135488 addons.go:238] Setting addon cloud-spanner=true in "addons-789752"
	I1027 22:17:31.871350 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.871810 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.884412 1135488 addons.go:69] Setting storage-provisioner=true in profile "addons-789752"
	I1027 22:17:31.884453 1135488 addons.go:238] Setting addon storage-provisioner=true in "addons-789752"
	I1027 22:17:31.884486 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.884974 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.888417 1135488 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-789752"
	I1027 22:17:31.888480 1135488 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-789752"
	I1027 22:17:31.888509 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.888971 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.897146 1135488 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-789752"
	I1027 22:17:31.898473 1135488 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-789752"
	I1027 22:17:31.898838 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.924133 1135488 addons.go:69] Setting volcano=true in profile "addons-789752"
	I1027 22:17:31.924168 1135488 addons.go:238] Setting addon volcano=true in "addons-789752"
	I1027 22:17:31.924204 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.924684 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.925866 1135488 addons.go:69] Setting default-storageclass=true in profile "addons-789752"
	I1027 22:17:31.925896 1135488 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-789752"
	I1027 22:17:31.926200 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.951600 1135488 addons.go:69] Setting gcp-auth=true in profile "addons-789752"
	I1027 22:17:31.951635 1135488 mustload.go:66] Loading cluster: addons-789752
	I1027 22:17:31.951848 1135488 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:17:31.952109 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.953826 1135488 addons.go:69] Setting volumesnapshots=true in profile "addons-789752"
	I1027 22:17:31.953904 1135488 addons.go:238] Setting addon volumesnapshots=true in "addons-789752"
	I1027 22:17:31.953962 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.954479 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.979363 1135488 addons.go:69] Setting ingress=true in profile "addons-789752"
	I1027 22:17:31.979398 1135488 addons.go:238] Setting addon ingress=true in "addons-789752"
	I1027 22:17:31.979454 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:31.979913 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:31.983030 1135488 out.go:179] * Verifying Kubernetes components...
	I1027 22:17:31.991563 1135488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:17:31.999781 1135488 addons.go:69] Setting ingress-dns=true in profile "addons-789752"
	I1027 22:17:31.999880 1135488 addons.go:238] Setting addon ingress-dns=true in "addons-789752"
	I1027 22:17:31.999928 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:32.000426 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:32.014544 1135488 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1027 22:17:32.017940 1135488 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1027 22:17:32.018188 1135488 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 22:17:32.018227 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1027 22:17:32.018476 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.018863 1135488 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1027 22:17:32.018250 1135488 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1027 22:17:32.028781 1135488 out.go:179]   - Using image docker.io/registry:3.0.0
	I1027 22:17:32.032086 1135488 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1027 22:17:32.032167 1135488 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1027 22:17:32.032271 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.052641 1135488 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1027 22:17:32.052720 1135488 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1027 22:17:32.053239 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.054163 1135488 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1027 22:17:32.054206 1135488 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1027 22:17:32.063802 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.090354 1135488 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1027 22:17:32.093872 1135488 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1027 22:17:32.098937 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1027 22:17:32.099045 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.102643 1135488 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1027 22:17:32.105322 1135488 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1027 22:17:32.105806 1135488 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1027 22:17:32.108228 1135488 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 22:17:32.108249 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1027 22:17:32.108311 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.119744 1135488 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 22:17:32.119811 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1027 22:17:32.119903 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.139338 1135488 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1027 22:17:32.139362 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1027 22:17:32.139437 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.157592 1135488 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:17:32.160862 1135488 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:17:32.160889 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:17:32.160958 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	W1027 22:17:32.179690 1135488 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1027 22:17:32.204186 1135488 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-789752"
	I1027 22:17:32.204230 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:32.204653 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:32.226365 1135488 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1027 22:17:32.228622 1135488 addons.go:238] Setting addon default-storageclass=true in "addons-789752"
	I1027 22:17:32.228660 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:32.229064 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:32.244696 1135488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1027 22:17:32.251329 1135488 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1027 22:17:32.257003 1135488 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1027 22:17:32.257029 1135488 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1027 22:17:32.257111 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.257301 1135488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1027 22:17:32.263843 1135488 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1027 22:17:32.266876 1135488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 22:17:32.266989 1135488 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 22:17:32.267000 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1027 22:17:32.267058 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.285755 1135488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1027 22:17:32.290535 1135488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1027 22:17:32.294540 1135488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1027 22:17:32.298519 1135488 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1027 22:17:32.302616 1135488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 22:17:32.302801 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:32.311473 1135488 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1027 22:17:32.311600 1135488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1027 22:17:32.315425 1135488 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 22:17:32.315448 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1027 22:17:32.315510 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.315721 1135488 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1027 22:17:32.315733 1135488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1027 22:17:32.315781 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.371241 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
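Every "new ssh client" line dials 127.0.0.1:34244 because the docker container inspect calls above resolve the host port that the node's 22/tcp was published on:

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-789752   # -> 34244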
	I1027 22:17:32.382446 1135488 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1027 22:17:32.385467 1135488 out.go:179]   - Using image docker.io/busybox:stable
	I1027 22:17:32.385609 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.388613 1135488 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 22:17:32.388637 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1027 22:17:32.388753 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.394709 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.420624 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.420670 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.423964 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.442680 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.478604 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.496075 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.498646 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.509923 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.535214 1135488 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:17:32.535235 1135488 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:17:32.535308 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:32.535544 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	W1027 22:17:32.544102 1135488 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 22:17:32.544201 1135488 retry.go:31] will retry after 233.663306ms: ssh: handshake failed: EOF
	I1027 22:17:32.549768 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.559083 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	W1027 22:17:32.563825 1135488 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 22:17:32.563851 1135488 retry.go:31] will retry after 302.013161ms: ssh: handshake failed: EOF
	I1027 22:17:32.578272 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:32.748784 1135488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 22:17:32.749041 1135488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1027 22:17:32.779496 1135488 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 22:17:32.779522 1135488 retry.go:31] will retry after 194.824224ms: ssh: handshake failed: EOF
	W1027 22:17:32.868174 1135488 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 22:17:32.868210 1135488 retry.go:31] will retry after 439.617533ms: ssh: handshake failed: EOF
	I1027 22:17:33.043502 1135488 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:33.043531 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1027 22:17:33.049672 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1027 22:17:33.107706 1135488 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1027 22:17:33.107769 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1027 22:17:33.120976 1135488 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1027 22:17:33.121051 1135488 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1027 22:17:33.128395 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 22:17:33.143070 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 22:17:33.158857 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 22:17:33.205985 1135488 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1027 22:17:33.206007 1135488 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1027 22:17:33.222061 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:17:33.225381 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:17:33.240303 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:33.296925 1135488 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1027 22:17:33.296994 1135488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1027 22:17:33.300261 1135488 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1027 22:17:33.300325 1135488 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1027 22:17:33.305513 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 22:17:33.311232 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 22:17:33.321494 1135488 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1027 22:17:33.321563 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1027 22:17:33.413006 1135488 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1027 22:17:33.413086 1135488 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1027 22:17:33.512101 1135488 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 22:17:33.512178 1135488 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1027 22:17:33.535736 1135488 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1027 22:17:33.535804 1135488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1027 22:17:33.570249 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1027 22:17:33.615329 1135488 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1027 22:17:33.615405 1135488 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1027 22:17:33.624448 1135488 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1027 22:17:33.624520 1135488 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1027 22:17:33.731555 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 22:17:33.734905 1135488 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1027 22:17:33.734971 1135488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1027 22:17:33.763905 1135488 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1027 22:17:33.763969 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1027 22:17:33.765939 1135488 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1027 22:17:33.766003 1135488 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1027 22:17:33.956042 1135488 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1027 22:17:33.956128 1135488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1027 22:17:33.957173 1135488 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1027 22:17:33.957222 1135488 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1027 22:17:33.984725 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1027 22:17:34.102422 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 22:17:34.148787 1135488 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1027 22:17:34.148863 1135488 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1027 22:17:34.226692 1135488 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1027 22:17:34.226759 1135488 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1027 22:17:34.374655 1135488 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 22:17:34.374727 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1027 22:17:34.413208 1135488 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1027 22:17:34.413274 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1027 22:17:34.572422 1135488 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1027 22:17:34.572498 1135488 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1027 22:17:34.584287 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 22:17:34.784913 1135488 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.035809863s)
	I1027 22:17:34.784993 1135488 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.036139476s)
	I1027 22:17:34.785076 1135488 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
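The sed pipeline that just completed rewrites the coredns ConfigMap in place: it inserts a hosts block ahead of the forward directive so host.minikube.internal resolves to the docker bridge gateway, and adds the log plugin. Assuming the replace succeeded, the injected fragment can be read back with:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    #        hosts {
    #           192.168.49.1 host.minikube.internal
    #           fallthrough
    #        }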
	I1027 22:17:34.786695 1135488 node_ready.go:35] waiting up to 6m0s for node "addons-789752" to be "Ready" ...
	I1027 22:17:34.840601 1135488 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1027 22:17:34.840666 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1027 22:17:34.963163 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.913450691s)
	I1027 22:17:35.104223 1135488 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1027 22:17:35.104543 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1027 22:17:35.268927 1135488 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 22:17:35.268949 1135488 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1027 22:17:35.290907 1135488 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-789752" context rescaled to 1 replicas
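On a single-node cluster minikube trims the default two-replica CoreDNS deployment down to one; the rescale logged above is roughly equivalent to:

    kubectl -n kube-system scale deployment coredns --replicas=1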
	I1027 22:17:35.413332 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1027 22:17:36.823648 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:38.015395 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.886915728s)
	I1027 22:17:38.015433 1135488 addons.go:479] Verifying addon ingress=true in "addons-789752"
	I1027 22:17:38.015606 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.872457534s)
	I1027 22:17:38.015770 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.856839349s)
	I1027 22:17:38.015854 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.793776677s)
	I1027 22:17:38.015972 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.775605522s)
	W1027 22:17:38.015995 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:38.016012 1135488 retry.go:31] will retry after 259.502133ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:38.016054 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.710477319s)
	I1027 22:17:38.016099 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.704806841s)
	I1027 22:17:38.016131 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.445816965s)
	I1027 22:17:38.016144 1135488 addons.go:479] Verifying addon registry=true in "addons-789752"
	I1027 22:17:38.016231 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.790492792s)
	I1027 22:17:38.016705 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.285080204s)
	I1027 22:17:38.016734 1135488 addons.go:479] Verifying addon metrics-server=true in "addons-789752"
	I1027 22:17:38.016776 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.03197535s)
	I1027 22:17:38.016918 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.914408376s)
	I1027 22:17:38.017077 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.43270494s)
	W1027 22:17:38.017106 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1027 22:17:38.017122 1135488 retry.go:31] will retry after 159.932379ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1027 22:17:38.019634 1135488 out.go:179] * Verifying registry addon...
	I1027 22:17:38.019674 1135488 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-789752 service yakd-dashboard -n yakd-dashboard
	
	I1027 22:17:38.019802 1135488 out.go:179] * Verifying ingress addon...
	I1027 22:17:38.024285 1135488 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1027 22:17:38.025280 1135488 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1027 22:17:38.036245 1135488 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 22:17:38.036310 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:38.037182 1135488 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1027 22:17:38.037200 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 22:17:38.063123 1135488 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1027 22:17:38.178039 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 22:17:38.276013 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:38.350642 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.937221549s)
	I1027 22:17:38.350676 1135488 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-789752"
	I1027 22:17:38.356981 1135488 out.go:179] * Verifying csi-hostpath-driver addon...
	I1027 22:17:38.359827 1135488 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1027 22:17:38.370160 1135488 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 22:17:38.370186 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:38.529347 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:38.530016 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:38.864394 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:39.030207 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:39.030323 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 22:17:39.289886 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:39.363795 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:39.527479 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:39.528512 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:39.863353 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:39.912705 1135488 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1027 22:17:39.912785 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:39.930296 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:40.028217 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:40.028800 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:40.049714 1135488 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1027 22:17:40.064703 1135488 addons.go:238] Setting addon gcp-auth=true in "addons-789752"
	I1027 22:17:40.064764 1135488 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:17:40.065203 1135488 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:17:40.082707 1135488 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1027 22:17:40.082773 1135488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:17:40.100322 1135488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:17:40.363314 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:40.527608 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:40.528526 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:40.867150 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:41.033048 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:41.033444 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:41.123706 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.945614677s)
	I1027 22:17:41.123835 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.847787093s)
	W1027 22:17:41.123857 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:41.123869 1135488 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.041124811s)
	I1027 22:17:41.123875 1135488 retry.go:31] will retry after 545.63937ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:41.126991 1135488 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 22:17:41.129876 1135488 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1027 22:17:41.132662 1135488 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1027 22:17:41.132690 1135488 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1027 22:17:41.146129 1135488 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1027 22:17:41.146199 1135488 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1027 22:17:41.159674 1135488 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 22:17:41.159698 1135488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1027 22:17:41.173488 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	W1027 22:17:41.290614 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:41.365504 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:41.530205 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:41.601087 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:41.669554 1135488 addons.go:479] Verifying addon gcp-auth=true in "addons-789752"
	I1027 22:17:41.669801 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:41.672791 1135488 out.go:179] * Verifying gcp-auth addon...
	I1027 22:17:41.676424 1135488 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1027 22:17:41.697727 1135488 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1027 22:17:41.697798 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:41.863857 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:42.028181 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:42.029758 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:42.180707 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:42.363632 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1027 22:17:42.520225 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:42.520299 1135488 retry.go:31] will retry after 490.646191ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:42.527418 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:42.528460 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:42.681075 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:42.862881 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:43.011229 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:43.028053 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:43.028731 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:43.180575 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:43.291250 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:43.363723 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:43.527510 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:43.530429 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:43.679890 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:43.859257 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:43.859291 1135488 retry.go:31] will retry after 729.731699ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:43.862869 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:44.029040 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:44.029304 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:44.180786 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:44.364732 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:44.527726 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:44.528839 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:44.590007 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:44.686540 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:44.863324 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:45.047005 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:45.051195 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:45.181781 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:45.292545 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:45.364825 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:45.530519 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:45.530771 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 22:17:45.535017 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:45.535054 1135488 retry.go:31] will retry after 757.383788ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:45.680779 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:45.863213 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:46.027139 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:46.028535 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:46.179469 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:46.292576 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:46.366066 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:46.528948 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:46.529123 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:46.688597 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:46.863757 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:47.029552 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:47.029965 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 22:17:47.102550 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:47.102653 1135488 retry.go:31] will retry after 1.667215898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:47.179723 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:47.363092 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:47.527404 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:47.527945 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:47.682483 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:47.790200 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:47.863206 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:48.027442 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:48.028685 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:48.179806 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:48.363759 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:48.528204 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:48.528353 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:48.692219 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:48.770445 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:48.863527 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:49.029017 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:49.030296 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:49.190442 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:49.363640 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:49.528767 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:49.529387 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 22:17:49.583076 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:49.583106 1135488 retry.go:31] will retry after 3.944383448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:49.687379 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:49.862876 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:50.028975 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:50.029329 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:50.180223 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:50.290543 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:50.362693 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:50.529152 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:50.529344 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:50.680144 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:50.863105 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:51.027905 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:51.029467 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:51.179635 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:51.363531 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:51.528638 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:51.528678 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:51.685232 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:51.863324 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:52.028604 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:52.028796 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:52.179934 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:52.362735 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:52.528686 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:52.528836 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:52.685443 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:52.790346 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:52.863212 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:53.027349 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:53.028442 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:53.179830 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:53.363432 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:53.527006 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:53.528272 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:53.529834 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:53.680036 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:53.863760 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:54.029674 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:54.029903 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:54.180022 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:54.328716 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:54.328751 1135488 retry.go:31] will retry after 5.199387802s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:17:54.363362 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:54.527619 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:54.528250 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:54.686317 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:54.863002 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:55.028845 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:55.029136 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:55.179953 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:55.290121 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:55.363109 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:55.529885 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:55.530116 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:55.686240 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:55.862903 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:56.028547 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:56.028788 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:56.179872 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:56.363971 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:56.528236 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:56.528286 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:56.684572 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:56.863509 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:57.027741 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:57.029740 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:57.179657 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:57.290473 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:57.363732 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:57.527464 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:57.531193 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:57.687794 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:57.862955 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:58.028077 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:58.028711 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:58.179825 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:58.363314 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:58.527416 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:58.527870 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:58.685549 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:58.863192 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:59.027484 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:59.029312 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:59.180240 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:17:59.363432 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:17:59.527303 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:17:59.527933 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:17:59.528944 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:17:59.680451 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:17:59.790900 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:17:59.863472 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:00.044540 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:00.045566 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:00.201716 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:00.365245 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:00.529482 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:00.529826 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:00.543601 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.014624652s)
	W1027 22:18:00.543641 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:18:00.543666 1135488 retry.go:31] will retry after 6.34078197s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:18:00.685067 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:00.862804 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:01.028219 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:01.028391 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:01.179437 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:01.363076 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:01.527751 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:01.528090 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:01.684140 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:01.862773 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:02.027759 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:02.028733 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:02.179817 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:18:02.289663 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:18:02.363586 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:02.528251 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:02.528691 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:02.685819 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:02.863812 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:03.027929 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:03.029145 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:03.180200 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:03.363149 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:03.528474 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:03.528615 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:03.685354 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:03.862815 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:04.027971 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:04.028398 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:04.179417 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:18:04.290218 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:18:04.363368 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:04.527632 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:04.528542 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:04.685507 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:04.863919 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:05.028652 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:05.028779 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:05.179998 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:05.363258 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:05.527728 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:05.528622 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:05.686029 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:05.862845 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:06.028888 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:06.029345 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:06.180237 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:18:06.290327 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:18:06.363524 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:06.527542 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:06.528764 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:06.684646 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:06.863716 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:06.884639 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:18:07.030081 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:07.030537 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:07.179597 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:07.363659 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:07.531089 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:07.531482 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:07.693064 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:18:07.706848 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:18:07.706879 1135488 retry.go:31] will retry after 14.118883052s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
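
The retry delays grow from 6.3s above to 14.1s here (and later 19.6s), the shape of randomized exponential backoff. A minimal sketch of that pattern, with illustrative names rather than minikube's actual retry.go:

    // backoff.go — a randomized-exponential-backoff sketch matching the
    // growth of the "will retry after ..." delays in the log.
    package main

    import (
    	"fmt"
    	"math"
    	"math/rand"
    	"time"
    )

    // retry runs op up to attempts times, sleeping base*2^i scaled by a
    // random factor in [0.5, 1.5) after each failure.
    func retry(attempts int, base time.Duration, op func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		d := time.Duration(float64(base) * math.Exp2(float64(i)) * (0.5 + rand.Float64()))
    		fmt.Printf("will retry after %s: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	_ = retry(4, 200*time.Millisecond, func() error {
    		return fmt.Errorf("apply failed") // stand-in for the kubectl apply above
    	})
    }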
	I1027 22:18:07.862779 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:08.028584 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:08.029675 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:08.179307 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:08.363270 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:08.528627 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:08.528684 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:08.690921 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:18:08.789979 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:18:08.862908 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:09.027899 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:09.029190 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:09.180043 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:09.363145 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:09.528549 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:09.528569 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:09.680015 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:09.863267 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:10.027919 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:10.029374 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:10.180537 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:10.363281 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:10.527250 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:10.528325 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:10.683500 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 22:18:10.790785 1135488 node_ready.go:57] node "addons-789752" has "Ready":"False" status (will retry)
	I1027 22:18:10.863847 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:11.028845 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:11.029069 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:11.180120 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:11.363354 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:11.527672 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:11.528494 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:11.680039 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:11.863198 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:12.027375 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:12.028712 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:12.179882 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:12.363026 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:12.527669 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:12.529249 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:12.689499 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:12.863486 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:13.028466 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:13.028848 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:13.202212 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:13.328692 1135488 node_ready.go:49] node "addons-789752" is "Ready"
	I1027 22:18:13.328724 1135488 node_ready.go:38] duration metric: took 38.541979638s for node "addons-789752" to be "Ready" ...
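
The node_ready lines that just resolved poll the node's NodeReady condition until its status flips to True. A sketch of that predicate against the client-go types (the helper name is hypothetical):

    // nodeready.go — a sketch of the check behind the node "Ready":"False"
    // retries: a node is Ready when its NodeReady condition is True.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func nodeIsReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	n := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
    		{Type: corev1.NodeReady, Status: corev1.ConditionTrue},
    	}}}
    	fmt.Println(nodeIsReady(n)) // true
    }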
	I1027 22:18:13.328739 1135488 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:18:13.328823 1135488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:18:13.348158 1135488 api_server.go:72] duration metric: took 41.513838898s to wait for apiserver process to appear ...
	I1027 22:18:13.348184 1135488 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:18:13.348205 1135488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1027 22:18:13.360708 1135488 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1027 22:18:13.362757 1135488 api_server.go:141] control plane version: v1.34.1
	I1027 22:18:13.362785 1135488 api_server.go:131] duration metric: took 14.593933ms to wait for apiserver health ...
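
The healthz probe above is a plain HTTPS GET that treats a 200 response with body "ok" as healthy. A minimal sketch; TLS verification is skipped here purely for brevity, whereas minikube trusts the cluster CA:

    // healthz.go — a sketch of the apiserver health probe in the log.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	c := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := c.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }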
	I1027 22:18:13.362797 1135488 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:18:13.381529 1135488 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 22:18:13.381604 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:13.382548 1135488 system_pods.go:59] 19 kube-system pods found
	I1027 22:18:13.382646 1135488 system_pods.go:61] "coredns-66bc5c9577-5586j" [ce92129d-e557-4e8c-97b9-d778d8447f67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:18:13.382674 1135488 system_pods.go:61] "csi-hostpath-attacher-0" [51b9885c-3a47-45a8-b119-e37bf23eab06] Pending
	I1027 22:18:13.382696 1135488 system_pods.go:61] "csi-hostpath-resizer-0" [8596dd50-937a-4378-af75-36bd1facd079] Pending
	I1027 22:18:13.382730 1135488 system_pods.go:61] "csi-hostpathplugin-lrbhx" [f0e7bc75-d84d-4a92-9233-e7e5e4934f60] Pending
	I1027 22:18:13.382755 1135488 system_pods.go:61] "etcd-addons-789752" [cf8e0540-6bac-49c3-9b0e-ef24d03fe92d] Running
	I1027 22:18:13.382774 1135488 system_pods.go:61] "kindnet-kn5mv" [b5b9e324-a60d-4dbd-b905-bb17c7a32b8a] Running
	I1027 22:18:13.382810 1135488 system_pods.go:61] "kube-apiserver-addons-789752" [a8fab895-7ef6-4cf2-928d-7d563cdb3917] Running
	I1027 22:18:13.382834 1135488 system_pods.go:61] "kube-controller-manager-addons-789752" [32c9db7f-3cf3-4fef-9add-764e75ba98c1] Running
	I1027 22:18:13.382857 1135488 system_pods.go:61] "kube-ingress-dns-minikube" [30c831ba-9e90-4d98-83a4-3636dc00800b] Pending
	I1027 22:18:13.382893 1135488 system_pods.go:61] "kube-proxy-d6r65" [eda11ab0-4509-4ed0-a84e-e4a8146e92a1] Running
	I1027 22:18:13.382918 1135488 system_pods.go:61] "kube-scheduler-addons-789752" [e7aba73a-3d2b-4e96-994b-00677241bace] Running
	I1027 22:18:13.382941 1135488 system_pods.go:61] "metrics-server-85b7d694d7-8kfjg" [c1cd9081-6ece-4513-a137-8d3c8a378a70] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 22:18:13.382980 1135488 system_pods.go:61] "nvidia-device-plugin-daemonset-7xjnb" [d25c58e2-5389-4ef7-bdb1-7f57a029a00b] Pending
	I1027 22:18:13.383005 1135488 system_pods.go:61] "registry-6b586f9694-vw4fc" [827638e6-9844-4d0b-a405-c1752b7deb36] Pending
	I1027 22:18:13.383026 1135488 system_pods.go:61] "registry-creds-764b6fb674-ldrtc" [bd101187-f370-4b46-8017-bd4f7b44959c] Pending
	I1027 22:18:13.383062 1135488 system_pods.go:61] "registry-proxy-pxgxr" [f3af9e0b-d8bc-47fc-b5a9-4e6b9d23fc0c] Pending
	I1027 22:18:13.383086 1135488 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dz2cc" [e8e9917f-86cb-4682-903c-f394c84eb57f] Pending
	I1027 22:18:13.383104 1135488 system_pods.go:61] "snapshot-controller-7d9fbc56b8-vxkd6" [576ae499-cdfc-4bd8-a703-22ef0903f4fb] Pending
	I1027 22:18:13.383138 1135488 system_pods.go:61] "storage-provisioner" [5fe23b74-3690-4678-9086-440db4325b59] Pending
	I1027 22:18:13.383162 1135488 system_pods.go:74] duration metric: took 20.357886ms to wait for pod list to return data ...
	I1027 22:18:13.383184 1135488 default_sa.go:34] waiting for default service account to be created ...
	I1027 22:18:13.394555 1135488 default_sa.go:45] found service account: "default"
	I1027 22:18:13.394635 1135488 default_sa.go:55] duration metric: took 11.431042ms for default service account to be created ...
	I1027 22:18:13.394675 1135488 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 22:18:13.404380 1135488 system_pods.go:86] 19 kube-system pods found
	I1027 22:18:13.404474 1135488 system_pods.go:89] "coredns-66bc5c9577-5586j" [ce92129d-e557-4e8c-97b9-d778d8447f67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:18:13.404494 1135488 system_pods.go:89] "csi-hostpath-attacher-0" [51b9885c-3a47-45a8-b119-e37bf23eab06] Pending
	I1027 22:18:13.404515 1135488 system_pods.go:89] "csi-hostpath-resizer-0" [8596dd50-937a-4378-af75-36bd1facd079] Pending
	I1027 22:18:13.404550 1135488 system_pods.go:89] "csi-hostpathplugin-lrbhx" [f0e7bc75-d84d-4a92-9233-e7e5e4934f60] Pending
	I1027 22:18:13.404569 1135488 system_pods.go:89] "etcd-addons-789752" [cf8e0540-6bac-49c3-9b0e-ef24d03fe92d] Running
	I1027 22:18:13.404589 1135488 system_pods.go:89] "kindnet-kn5mv" [b5b9e324-a60d-4dbd-b905-bb17c7a32b8a] Running
	I1027 22:18:13.404610 1135488 system_pods.go:89] "kube-apiserver-addons-789752" [a8fab895-7ef6-4cf2-928d-7d563cdb3917] Running
	I1027 22:18:13.404644 1135488 system_pods.go:89] "kube-controller-manager-addons-789752" [32c9db7f-3cf3-4fef-9add-764e75ba98c1] Running
	I1027 22:18:13.404663 1135488 system_pods.go:89] "kube-ingress-dns-minikube" [30c831ba-9e90-4d98-83a4-3636dc00800b] Pending
	I1027 22:18:13.404680 1135488 system_pods.go:89] "kube-proxy-d6r65" [eda11ab0-4509-4ed0-a84e-e4a8146e92a1] Running
	I1027 22:18:13.404713 1135488 system_pods.go:89] "kube-scheduler-addons-789752" [e7aba73a-3d2b-4e96-994b-00677241bace] Running
	I1027 22:18:13.404739 1135488 system_pods.go:89] "metrics-server-85b7d694d7-8kfjg" [c1cd9081-6ece-4513-a137-8d3c8a378a70] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 22:18:13.404768 1135488 system_pods.go:89] "nvidia-device-plugin-daemonset-7xjnb" [d25c58e2-5389-4ef7-bdb1-7f57a029a00b] Pending
	I1027 22:18:13.404806 1135488 system_pods.go:89] "registry-6b586f9694-vw4fc" [827638e6-9844-4d0b-a405-c1752b7deb36] Pending
	I1027 22:18:13.404830 1135488 system_pods.go:89] "registry-creds-764b6fb674-ldrtc" [bd101187-f370-4b46-8017-bd4f7b44959c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 22:18:13.404850 1135488 system_pods.go:89] "registry-proxy-pxgxr" [f3af9e0b-d8bc-47fc-b5a9-4e6b9d23fc0c] Pending
	I1027 22:18:13.404884 1135488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dz2cc" [e8e9917f-86cb-4682-903c-f394c84eb57f] Pending
	I1027 22:18:13.404907 1135488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vxkd6" [576ae499-cdfc-4bd8-a703-22ef0903f4fb] Pending
	I1027 22:18:13.404924 1135488 system_pods.go:89] "storage-provisioner" [5fe23b74-3690-4678-9086-440db4325b59] Pending
	I1027 22:18:13.404968 1135488 retry.go:31] will retry after 228.262723ms: missing components: kube-dns
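
Each per-pod line above is the pod's phase joined with any not-yet-true Ready/ContainersReady conditions and their messages. A sketch of how such a line can be derived from a corev1.Pod (the helper name is hypothetical):

    // podstate.go — a sketch of the per-pod status strings in the log.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func describe(p *corev1.Pod) string {
    	s := string(p.Status.Phase)
    	for _, c := range p.Status.Conditions {
    		if (c.Type == corev1.PodReady || c.Type == corev1.ContainersReady) &&
    			c.Status != corev1.ConditionTrue {
    			s += fmt.Sprintf(" / %s:%s (%s)", c.Type, c.Reason, c.Message)
    		}
    	}
    	return s
    }

    func main() {
    	p := &corev1.Pod{Status: corev1.PodStatus{
    		Phase: corev1.PodPending,
    		Conditions: []corev1.PodCondition{{
    			Type: corev1.PodReady, Status: corev1.ConditionFalse,
    			Reason: "ContainersNotReady", Message: "containers with unready status: [coredns]",
    		}},
    	}}
    	fmt.Println(describe(p))
    	// Pending / Ready:ContainersNotReady (containers with unready status: [coredns])
    }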
	I1027 22:18:13.573100 1135488 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 22:18:13.573175 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:13.574590 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:13.682359 1135488 system_pods.go:86] 19 kube-system pods found
	I1027 22:18:13.682488 1135488 system_pods.go:89] "coredns-66bc5c9577-5586j" [ce92129d-e557-4e8c-97b9-d778d8447f67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:18:13.682511 1135488 system_pods.go:89] "csi-hostpath-attacher-0" [51b9885c-3a47-45a8-b119-e37bf23eab06] Pending
	I1027 22:18:13.682559 1135488 system_pods.go:89] "csi-hostpath-resizer-0" [8596dd50-937a-4378-af75-36bd1facd079] Pending
	I1027 22:18:13.682645 1135488 system_pods.go:89] "csi-hostpathplugin-lrbhx" [f0e7bc75-d84d-4a92-9233-e7e5e4934f60] Pending
	I1027 22:18:13.682674 1135488 system_pods.go:89] "etcd-addons-789752" [cf8e0540-6bac-49c3-9b0e-ef24d03fe92d] Running
	I1027 22:18:13.682714 1135488 system_pods.go:89] "kindnet-kn5mv" [b5b9e324-a60d-4dbd-b905-bb17c7a32b8a] Running
	I1027 22:18:13.682737 1135488 system_pods.go:89] "kube-apiserver-addons-789752" [a8fab895-7ef6-4cf2-928d-7d563cdb3917] Running
	I1027 22:18:13.682756 1135488 system_pods.go:89] "kube-controller-manager-addons-789752" [32c9db7f-3cf3-4fef-9add-764e75ba98c1] Running
	I1027 22:18:13.682799 1135488 system_pods.go:89] "kube-ingress-dns-minikube" [30c831ba-9e90-4d98-83a4-3636dc00800b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 22:18:13.682820 1135488 system_pods.go:89] "kube-proxy-d6r65" [eda11ab0-4509-4ed0-a84e-e4a8146e92a1] Running
	I1027 22:18:13.682840 1135488 system_pods.go:89] "kube-scheduler-addons-789752" [e7aba73a-3d2b-4e96-994b-00677241bace] Running
	I1027 22:18:13.682879 1135488 system_pods.go:89] "metrics-server-85b7d694d7-8kfjg" [c1cd9081-6ece-4513-a137-8d3c8a378a70] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 22:18:13.682908 1135488 system_pods.go:89] "nvidia-device-plugin-daemonset-7xjnb" [d25c58e2-5389-4ef7-bdb1-7f57a029a00b] Pending
	I1027 22:18:13.682933 1135488 system_pods.go:89] "registry-6b586f9694-vw4fc" [827638e6-9844-4d0b-a405-c1752b7deb36] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 22:18:13.682974 1135488 system_pods.go:89] "registry-creds-764b6fb674-ldrtc" [bd101187-f370-4b46-8017-bd4f7b44959c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 22:18:13.682997 1135488 system_pods.go:89] "registry-proxy-pxgxr" [f3af9e0b-d8bc-47fc-b5a9-4e6b9d23fc0c] Pending
	I1027 22:18:13.683029 1135488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dz2cc" [e8e9917f-86cb-4682-903c-f394c84eb57f] Pending
	I1027 22:18:13.683058 1135488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vxkd6" [576ae499-cdfc-4bd8-a703-22ef0903f4fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 22:18:13.683083 1135488 system_pods.go:89] "storage-provisioner" [5fe23b74-3690-4678-9086-440db4325b59] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:18:13.683129 1135488 retry.go:31] will retry after 357.428943ms: missing components: kube-dns
	I1027 22:18:13.720423 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:13.870334 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:14.058646 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:14.069720 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:14.073072 1135488 system_pods.go:86] 19 kube-system pods found
	I1027 22:18:14.073195 1135488 system_pods.go:89] "coredns-66bc5c9577-5586j" [ce92129d-e557-4e8c-97b9-d778d8447f67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:18:14.073256 1135488 system_pods.go:89] "csi-hostpath-attacher-0" [51b9885c-3a47-45a8-b119-e37bf23eab06] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 22:18:14.073293 1135488 system_pods.go:89] "csi-hostpath-resizer-0" [8596dd50-937a-4378-af75-36bd1facd079] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 22:18:14.073315 1135488 system_pods.go:89] "csi-hostpathplugin-lrbhx" [f0e7bc75-d84d-4a92-9233-e7e5e4934f60] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 22:18:14.073354 1135488 system_pods.go:89] "etcd-addons-789752" [cf8e0540-6bac-49c3-9b0e-ef24d03fe92d] Running
	I1027 22:18:14.073401 1135488 system_pods.go:89] "kindnet-kn5mv" [b5b9e324-a60d-4dbd-b905-bb17c7a32b8a] Running
	I1027 22:18:14.073464 1135488 system_pods.go:89] "kube-apiserver-addons-789752" [a8fab895-7ef6-4cf2-928d-7d563cdb3917] Running
	I1027 22:18:14.073484 1135488 system_pods.go:89] "kube-controller-manager-addons-789752" [32c9db7f-3cf3-4fef-9add-764e75ba98c1] Running
	I1027 22:18:14.073525 1135488 system_pods.go:89] "kube-ingress-dns-minikube" [30c831ba-9e90-4d98-83a4-3636dc00800b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 22:18:14.073547 1135488 system_pods.go:89] "kube-proxy-d6r65" [eda11ab0-4509-4ed0-a84e-e4a8146e92a1] Running
	I1027 22:18:14.073574 1135488 system_pods.go:89] "kube-scheduler-addons-789752" [e7aba73a-3d2b-4e96-994b-00677241bace] Running
	I1027 22:18:14.073610 1135488 system_pods.go:89] "metrics-server-85b7d694d7-8kfjg" [c1cd9081-6ece-4513-a137-8d3c8a378a70] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 22:18:14.073656 1135488 system_pods.go:89] "nvidia-device-plugin-daemonset-7xjnb" [d25c58e2-5389-4ef7-bdb1-7f57a029a00b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 22:18:14.073707 1135488 system_pods.go:89] "registry-6b586f9694-vw4fc" [827638e6-9844-4d0b-a405-c1752b7deb36] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 22:18:14.073747 1135488 system_pods.go:89] "registry-creds-764b6fb674-ldrtc" [bd101187-f370-4b46-8017-bd4f7b44959c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 22:18:14.073791 1135488 system_pods.go:89] "registry-proxy-pxgxr" [f3af9e0b-d8bc-47fc-b5a9-4e6b9d23fc0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 22:18:14.073827 1135488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dz2cc" [e8e9917f-86cb-4682-903c-f394c84eb57f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 22:18:14.073874 1135488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vxkd6" [576ae499-cdfc-4bd8-a703-22ef0903f4fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 22:18:14.073910 1135488 system_pods.go:89] "storage-provisioner" [5fe23b74-3690-4678-9086-440db4325b59] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:18:14.073963 1135488 retry.go:31] will retry after 331.542918ms: missing components: kube-dns
	I1027 22:18:14.185307 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:14.364262 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:14.422681 1135488 system_pods.go:86] 19 kube-system pods found
	I1027 22:18:14.422731 1135488 system_pods.go:89] "coredns-66bc5c9577-5586j" [ce92129d-e557-4e8c-97b9-d778d8447f67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:18:14.422743 1135488 system_pods.go:89] "csi-hostpath-attacher-0" [51b9885c-3a47-45a8-b119-e37bf23eab06] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 22:18:14.422752 1135488 system_pods.go:89] "csi-hostpath-resizer-0" [8596dd50-937a-4378-af75-36bd1facd079] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 22:18:14.422759 1135488 system_pods.go:89] "csi-hostpathplugin-lrbhx" [f0e7bc75-d84d-4a92-9233-e7e5e4934f60] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 22:18:14.422768 1135488 system_pods.go:89] "etcd-addons-789752" [cf8e0540-6bac-49c3-9b0e-ef24d03fe92d] Running
	I1027 22:18:14.422774 1135488 system_pods.go:89] "kindnet-kn5mv" [b5b9e324-a60d-4dbd-b905-bb17c7a32b8a] Running
	I1027 22:18:14.422784 1135488 system_pods.go:89] "kube-apiserver-addons-789752" [a8fab895-7ef6-4cf2-928d-7d563cdb3917] Running
	I1027 22:18:14.422789 1135488 system_pods.go:89] "kube-controller-manager-addons-789752" [32c9db7f-3cf3-4fef-9add-764e75ba98c1] Running
	I1027 22:18:14.422805 1135488 system_pods.go:89] "kube-ingress-dns-minikube" [30c831ba-9e90-4d98-83a4-3636dc00800b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 22:18:14.422814 1135488 system_pods.go:89] "kube-proxy-d6r65" [eda11ab0-4509-4ed0-a84e-e4a8146e92a1] Running
	I1027 22:18:14.422824 1135488 system_pods.go:89] "kube-scheduler-addons-789752" [e7aba73a-3d2b-4e96-994b-00677241bace] Running
	I1027 22:18:14.422831 1135488 system_pods.go:89] "metrics-server-85b7d694d7-8kfjg" [c1cd9081-6ece-4513-a137-8d3c8a378a70] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 22:18:14.422851 1135488 system_pods.go:89] "nvidia-device-plugin-daemonset-7xjnb" [d25c58e2-5389-4ef7-bdb1-7f57a029a00b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 22:18:14.422857 1135488 system_pods.go:89] "registry-6b586f9694-vw4fc" [827638e6-9844-4d0b-a405-c1752b7deb36] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 22:18:14.422864 1135488 system_pods.go:89] "registry-creds-764b6fb674-ldrtc" [bd101187-f370-4b46-8017-bd4f7b44959c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 22:18:14.422877 1135488 system_pods.go:89] "registry-proxy-pxgxr" [f3af9e0b-d8bc-47fc-b5a9-4e6b9d23fc0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 22:18:14.422897 1135488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dz2cc" [e8e9917f-86cb-4682-903c-f394c84eb57f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 22:18:14.422914 1135488 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vxkd6" [576ae499-cdfc-4bd8-a703-22ef0903f4fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 22:18:14.422929 1135488 system_pods.go:89] "storage-provisioner" [5fe23b74-3690-4678-9086-440db4325b59] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 22:18:14.422939 1135488 system_pods.go:126] duration metric: took 1.028227355s to wait for k8s-apps to be running ...
	I1027 22:18:14.422959 1135488 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 22:18:14.423018 1135488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:18:14.445065 1135488 system_svc.go:56] duration metric: took 22.096338ms WaitForService to wait for kubelet
	I1027 22:18:14.445096 1135488 kubeadm.go:587] duration metric: took 42.610782527s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
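
The kubelet check two lines up leans entirely on systemctl's exit code: is-active --quiet prints nothing and exits 0 only when the unit is active. A minimal sketch using the conventional invocation:

    // svcactive.go — a sketch of the "systemctl is-active --quiet" check.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Run returns nil only on exit status 0, i.e. an active unit.
    	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }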
	I1027 22:18:14.445116 1135488 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:18:14.506042 1135488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 22:18:14.506080 1135488 node_conditions.go:123] node cpu capacity is 2
	I1027 22:18:14.506095 1135488 node_conditions.go:105] duration metric: took 60.973225ms to run NodePressure ...
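
The NodePressure step reads the capacity figures above ("2" CPUs, "203034800Ki" ephemeral storage) from node.Status.Capacity as resource quantities. A sketch, constructing a node in place of a live API read:

    // capacity.go — a sketch of the node capacity read in the log.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
    	n := &corev1.Node{Status: corev1.NodeStatus{Capacity: corev1.ResourceList{
    		corev1.ResourceCPU:              resource.MustParse("2"),
    		corev1.ResourceEphemeralStorage: resource.MustParse("203034800Ki"),
    	}}}
    	cpu := n.Status.Capacity[corev1.ResourceCPU]
    	storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    	fmt.Printf("cpu=%s ephemeral-storage=%s\n", cpu.String(), storage.String())
    }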
	I1027 22:18:14.506108 1135488 start.go:242] waiting for startup goroutines ...
	I1027 22:18:14.542291 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:14.543618 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:14.679914 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:14.873319 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:15.033178 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:15.033952 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:15.180768 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:15.363562 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:15.530457 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:15.530954 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:15.696674 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:15.864878 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:16.028856 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:16.029060 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:16.180537 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:16.364962 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:16.529286 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:16.529492 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:16.685502 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:16.864380 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:17.029848 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:17.030196 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:17.180682 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:17.363799 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:17.529853 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:17.529990 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:17.683992 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:17.863740 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:18.030898 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:18.031317 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:18.180113 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:18.364576 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:18.528963 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:18.529728 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:18.685621 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:18.864088 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:19.029357 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:19.029514 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:19.179638 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:19.363817 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:19.530181 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:19.530954 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:19.686795 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:19.863094 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:20.029327 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:20.030278 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:20.182147 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:20.364724 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:20.531313 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:20.532382 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:20.694656 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:20.865639 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:21.036377 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:21.036797 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:21.181132 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:21.364690 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:21.530633 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:21.531008 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:21.689301 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:21.826589 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:18:21.865134 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:22.029543 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:22.030681 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:22.180053 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:22.364584 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:22.529201 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:22.530337 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:22.685896 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:22.828905 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.002276164s)
	W1027 22:18:22.828943 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:18:22.828968 1135488 retry.go:31] will retry after 19.616702267s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1027 22:18:22.862973 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:23.028799 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:23.028920 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:23.179738 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:23.364164 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:23.528869 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:23.529280 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:23.682301 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:23.863717 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:24.030206 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:24.030309 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:24.181021 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:24.364430 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:24.531207 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:24.531724 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:24.687330 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:24.863453 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:25.030564 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:25.031050 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:25.180717 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:25.363426 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:25.529758 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:25.530150 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 22:18:25.687165 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:25.864049 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:26.029469 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:26.029668 1135488 kapi.go:107] duration metric: took 48.005386311s to wait for kubernetes.io/minikube-addons=registry ...
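
The 48s wait that just completed is the product of the polling loop filling these lines: re-evaluate a label selector on a fixed interval until every matching pod is Running or a timeout fires. A sketch of that shape with apimachinery's wait package; the condition is stubbed here, where the real loop would list pods by a selector such as "kubernetes.io/minikube-addons=registry":

    // waitpods.go — a sketch of the kapi.go polling shape in the log.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
    	start := time.Now()
    	deadline := start.Add(2 * time.Second)
    	err := wait.PollUntilContextTimeout(context.Background(),
    		500*time.Millisecond, 5*time.Second, true,
    		func(ctx context.Context) (bool, error) {
    			// stand-in for: list pods by selector, true once all are Running
    			return time.Now().After(deadline), nil
    		})
    	fmt.Printf("duration metric: took %s, err=%v\n", time.Since(start), err)
    }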
	I1027 22:18:26.180113 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:26.364536 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:26.529383 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 94 similar kapi.go:96 "waiting for pod" polling lines (kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=csi-hostpath-driver, app.kubernetes.io/name=ingress-nginx; all still Pending) from 22:18:26 through 22:18:42 elided ...]
	I1027 22:18:42.365098 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:42.446421 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:18:42.531938 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:42.685980 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:42.863840 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:43.029631 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:43.180190 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:43.363712 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:43.529376 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:43.565802 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.119342126s)
	W1027 22:18:43.565836 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 22:18:43.565856 1135488 retry.go:31] will retry after 13.97949312s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
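
The retry.go:31 line above shows the addon manager re-queuing the failed apply with a randomized delay. As a rough, hypothetical sketch only (not minikube's actual implementation), a jittered-backoff retry loop in Go could look like the following; the attempt count and base delay are invented for illustration:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithJitter calls fn up to attempts times, sleeping a randomized
	// delay in [base, 2*base) between tries, in the spirit of the
	// "will retry after 13.97949312s" line above.
	func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
	}

	func main() {
		_ = retryWithJitter(3, 2*time.Second, func() error {
			return fmt.Errorf("apply failed") // stand-in for the kubectl apply above
		})
	}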
	I1027 22:18:43.684789 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:43.862872 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:44.030180 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 80 similar kapi.go:96 "waiting for pod" polling lines (all three pods still Pending) from 22:18:44 through 22:18:57 elided ...]
	I1027 22:18:57.528827 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:57.546156 1135488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 22:18:57.682931 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:57.863420 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:58.028579 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:58.179567 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:58.363773 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:58.529240 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:58.692750 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:18:58.697351 1135488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.15111446s)
	W1027 22:18:58.697404 1135488 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1027 22:18:58.697502 1135488 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
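
The repeated validation error above means kubectl found a YAML document in ig-crd.yaml with no top-level apiVersion or kind, which every Kubernetes manifest must declare; an empty or truncated document in the file produces exactly this message. A minimal, hypothetical header is sketched below, with the resource name invented for illustration:

	apiVersion: apiextensions.k8s.io/v1    # the field reported missing
	kind: CustomResourceDefinition         # likewise required
	metadata:
	  name: traces.gadget.example.io       # hypothetical name
	spec:
	  # group, names, scope, versions ...

As the error text itself notes, --validate=false would suppress the check, but that only hides the malformed document rather than fixing it.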
	I1027 22:18:58.864151 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:18:59.029627 1135488 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 22:18:59.179549 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 42 similar kapi.go:96 "waiting for pod" polling lines from 22:18:59 through 22:19:06 elided ...]
	I1027 22:19:06.364710 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:06.537770 1135488 kapi.go:107] duration metric: took 1m28.512483917s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1027 22:19:06.688815 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:06.864079 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:07.180393 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:07.363884 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:07.682821 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:07.863632 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:08.180765 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:08.363125 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:08.687862 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:08.863556 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:09.180695 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:09.366709 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:09.682117 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:09.864250 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:10.180338 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 22:19:10.365083 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:10.680467 1135488 kapi.go:107] duration metric: took 1m29.00403785s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1027 22:19:10.688343 1135488 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-789752 cluster.
	I1027 22:19:10.693341 1135488 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1027 22:19:10.697401 1135488 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
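
A minimal sketch of the opt-out described in the message above: a pod carrying a label with the gcp-auth-skip-secret key. The pod name and image here are placeholders:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                # hypothetical
	  labels:
	    gcp-auth-skip-secret: "true"    # the key the message above says to add
	spec:
	  containers:
	  - name: app
	    image: busybox                  # placeholder image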
	I1027 22:19:10.864823 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:11.364022 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:11.864330 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:12.363458 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:12.863574 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:13.363674 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:13.887007 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:14.364137 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:14.863895 1135488 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 22:19:15.364049 1135488 kapi.go:107] duration metric: took 1m37.004224387s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1027 22:19:15.367313 1135488 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, ingress-dns, storage-provisioner, nvidia-device-plugin, registry-creds, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1027 22:19:15.370455 1135488 addons.go:514] duration metric: took 1m43.535683543s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin ingress-dns storage-provisioner nvidia-device-plugin registry-creds metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1027 22:19:15.370508 1135488 start.go:247] waiting for cluster config update ...
	I1027 22:19:15.370530 1135488 start.go:256] writing updated cluster config ...
	I1027 22:19:15.370841 1135488 ssh_runner.go:195] Run: rm -f paused
	I1027 22:19:15.375690 1135488 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:19:15.379383 1135488 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5586j" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:15.385219 1135488 pod_ready.go:94] pod "coredns-66bc5c9577-5586j" is "Ready"
	I1027 22:19:15.385250 1135488 pod_ready.go:86] duration metric: took 5.83917ms for pod "coredns-66bc5c9577-5586j" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:15.464770 1135488 pod_ready.go:83] waiting for pod "etcd-addons-789752" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:15.469695 1135488 pod_ready.go:94] pod "etcd-addons-789752" is "Ready"
	I1027 22:19:15.469723 1135488 pod_ready.go:86] duration metric: took 4.926985ms for pod "etcd-addons-789752" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:15.472297 1135488 pod_ready.go:83] waiting for pod "kube-apiserver-addons-789752" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:15.477163 1135488 pod_ready.go:94] pod "kube-apiserver-addons-789752" is "Ready"
	I1027 22:19:15.477191 1135488 pod_ready.go:86] duration metric: took 4.865955ms for pod "kube-apiserver-addons-789752" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:15.479717 1135488 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-789752" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:15.780158 1135488 pod_ready.go:94] pod "kube-controller-manager-addons-789752" is "Ready"
	I1027 22:19:15.780187 1135488 pod_ready.go:86] duration metric: took 300.440656ms for pod "kube-controller-manager-addons-789752" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:15.979670 1135488 pod_ready.go:83] waiting for pod "kube-proxy-d6r65" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:16.379865 1135488 pod_ready.go:94] pod "kube-proxy-d6r65" is "Ready"
	I1027 22:19:16.379892 1135488 pod_ready.go:86] duration metric: took 400.191995ms for pod "kube-proxy-d6r65" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:16.583578 1135488 pod_ready.go:83] waiting for pod "kube-scheduler-addons-789752" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:16.981773 1135488 pod_ready.go:94] pod "kube-scheduler-addons-789752" is "Ready"
	I1027 22:19:16.981803 1135488 pod_ready.go:86] duration metric: took 398.19465ms for pod "kube-scheduler-addons-789752" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:19:16.981816 1135488 pod_ready.go:40] duration metric: took 1.606088683s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
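
The pod_ready lines above poll kube-system pods by the listed labels. Roughly the same check can be run by hand with kubectl; a sketch, assuming the kubeconfig context is named after the profile (addons-789752), with the 4m timeout taken from the "extra waiting up to 4m0s" line above:

	kubectl --context addons-789752 -n kube-system get pods \
	  -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'
	kubectl --context addons-789752 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m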
	I1027 22:19:17.046304 1135488 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 22:19:17.050035 1135488 out.go:179] * Done! kubectl is now configured to use "addons-789752" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 27 22:19:46 addons-789752 crio[831]: time="2025-10-27T22:19:46.8383104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:19:46 addons-789752 crio[831]: time="2025-10-27T22:19:46.838919984Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:19:46 addons-789752 crio[831]: time="2025-10-27T22:19:46.856700981Z" level=info msg="Created container 7db8250237d57b225a35f25bd38706214bc8655c939cd9c541acd40253940f79: default/test-local-path/busybox" id=520d5232-de65-4322-8334-44c4befddee5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:19:46 addons-789752 crio[831]: time="2025-10-27T22:19:46.857831926Z" level=info msg="Starting container: 7db8250237d57b225a35f25bd38706214bc8655c939cd9c541acd40253940f79" id=ccdfc7e8-fedb-4196-b965-e19b261c8e13 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:19:46 addons-789752 crio[831]: time="2025-10-27T22:19:46.859618626Z" level=info msg="Started container" PID=5419 containerID=7db8250237d57b225a35f25bd38706214bc8655c939cd9c541acd40253940f79 description=default/test-local-path/busybox id=ccdfc7e8-fedb-4196-b965-e19b261c8e13 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f01cc1a13147b3c00f64a1cbda790d1894d3d5d98e7403c19c8ba6ef87673e20
	Oct 27 22:19:48 addons-789752 crio[831]: time="2025-10-27T22:19:48.401786624Z" level=info msg="Stopping pod sandbox: f01cc1a13147b3c00f64a1cbda790d1894d3d5d98e7403c19c8ba6ef87673e20" id=5a46568d-7372-49b5-b315-b4617960c67a name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 22:19:48 addons-789752 crio[831]: time="2025-10-27T22:19:48.402093803Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:f01cc1a13147b3c00f64a1cbda790d1894d3d5d98e7403c19c8ba6ef87673e20 UID:8a98c85b-ffc0-4bbb-b3f4-04bec34d9867 NetNS:/var/run/netns/f2f29a97-618d-4ab3-87e7-b06952572e91 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012ecd0}] Aliases:map[]}"
	Oct 27 22:19:48 addons-789752 crio[831]: time="2025-10-27T22:19:48.40223669Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Oct 27 22:19:48 addons-789752 crio[831]: time="2025-10-27T22:19:48.432452911Z" level=info msg="Stopped pod sandbox: f01cc1a13147b3c00f64a1cbda790d1894d3d5d98e7403c19c8ba6ef87673e20" id=5a46568d-7372-49b5-b315-b4617960c67a name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 22:19:50 addons-789752 crio[831]: time="2025-10-27T22:19:50.032151627Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5/POD" id=424774dc-30b6-409f-a971-3490060cbd90 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:19:50 addons-789752 crio[831]: time="2025-10-27T22:19:50.032230184Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:19:50 addons-789752 crio[831]: time="2025-10-27T22:19:50.041731206Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5 Namespace:local-path-storage ID:e5dec57aed787c8cc0db0b2a43e78c24fe2ac6411ee6dbfe7d3c618692239952 UID:512a82e7-b312-4137-b5db-f7fa7264c299 NetNS:/var/run/netns/032aa988-f32d-4215-a3f2-5750eecee8fd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001b60cd0}] Aliases:map[]}"
	Oct 27 22:19:50 addons-789752 crio[831]: time="2025-10-27T22:19:50.041776589Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5 to CNI network \"kindnet\" (type=ptp)"
	Oct 27 22:19:50 addons-789752 crio[831]: time="2025-10-27T22:19:50.053980877Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5 Namespace:local-path-storage ID:e5dec57aed787c8cc0db0b2a43e78c24fe2ac6411ee6dbfe7d3c618692239952 UID:512a82e7-b312-4137-b5db-f7fa7264c299 NetNS:/var/run/netns/032aa988-f32d-4215-a3f2-5750eecee8fd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001b60cd0}] Aliases:map[]}"
	Oct 27 22:19:50 addons-789752 crio[831]: time="2025-10-27T22:19:50.054153712Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5 for CNI network kindnet (type=ptp)"
	Oct 27 22:19:50 addons-789752 crio[831]: time="2025-10-27T22:19:50.058044318Z" level=info msg="Ran pod sandbox e5dec57aed787c8cc0db0b2a43e78c24fe2ac6411ee6dbfe7d3c618692239952 with infra container: local-path-storage/helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5/POD" id=424774dc-30b6-409f-a971-3490060cbd90 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:19:50 addons-789752 crio[831]: time="2025-10-27T22:19:50.059321012Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=e3a14df0-4029-490f-9505-41e741d01ee8 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:19:50 addons-789752 crio[831]: time="2025-10-27T22:19:50.068291598Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=412af471-b51f-4797-84d2-9bf3467eb2bf name=/runtime.v1.ImageService/ImageStatus
	Oct 27 22:19:50 addons-789752 crio[831]: time="2025-10-27T22:19:50.076389105Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5/helper-pod" id=73ab1937-5828-4aa3-9a72-7bd7adaf276a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:19:50 addons-789752 crio[831]: time="2025-10-27T22:19:50.076542174Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:19:50 addons-789752 crio[831]: time="2025-10-27T22:19:50.085579706Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:19:50 addons-789752 crio[831]: time="2025-10-27T22:19:50.086182767Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 22:19:50 addons-789752 crio[831]: time="2025-10-27T22:19:50.119580071Z" level=info msg="Created container da9eb2d7ebb489e14d2f786f43b9024d7c100734f117792dd9ab981358e4638f: local-path-storage/helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5/helper-pod" id=73ab1937-5828-4aa3-9a72-7bd7adaf276a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 22:19:50 addons-789752 crio[831]: time="2025-10-27T22:19:50.129379115Z" level=info msg="Starting container: da9eb2d7ebb489e14d2f786f43b9024d7c100734f117792dd9ab981358e4638f" id=ee688ca0-f10f-4880-a893-d0822c8bc109 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 22:19:50 addons-789752 crio[831]: time="2025-10-27T22:19:50.142201898Z" level=info msg="Started container" PID=5539 containerID=da9eb2d7ebb489e14d2f786f43b9024d7c100734f117792dd9ab981358e4638f description=local-path-storage/helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5/helper-pod id=ee688ca0-f10f-4880-a893-d0822c8bc109 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e5dec57aed787c8cc0db0b2a43e78c24fe2ac6411ee6dbfe7d3c618692239952
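
The journal entries above and the container-status table below can both be reproduced on the node with crictl, the CRI client that works against CRI-O's socket. A sketch, assuming shell access via minikube ssh; the container ID is taken from the table below:

	minikube -p addons-789752 ssh
	sudo crictl ps -a                        # yields a table like "container status" below
	sudo crictl logs da9eb2d7ebb48           # logs for a container ID from that table
	sudo journalctl -u crio --since today    # the CRI-O entries above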
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	da9eb2d7ebb48       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   e5dec57aed787       helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5   local-path-storage
	7db8250237d57       docker.io/library/busybox@sha256:aefc3a378c4cf11a6d85071438d3bf7634633a34c6a68d4c5f928516d556c366                                            4 seconds ago        Exited              busybox                                  0                   f01cc1a13147b       test-local-path                                              default
	85974f054a4f1       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            8 seconds ago        Exited              helper-pod                               0                   8abb53f14e52c       helper-pod-create-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5   local-path-storage
	9da68778485a7       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          9 seconds ago        Exited              registry-test                            0                   305fd67e37bf9       registry-test                                                default
	0c81d6f75b203       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          30 seconds ago       Running             busybox                                  0                   d09d58c507f1a       busybox                                                      default
	75710d7cc5263       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          36 seconds ago       Running             csi-snapshotter                          0                   4875b9d71c445       csi-hostpathplugin-lrbhx                                     kube-system
	ba4375e556d33       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          37 seconds ago       Running             csi-provisioner                          0                   4875b9d71c445       csi-hostpathplugin-lrbhx                                     kube-system
	6360be647f550       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            39 seconds ago       Running             liveness-probe                           0                   4875b9d71c445       csi-hostpathplugin-lrbhx                                     kube-system
	718db41ae0e01       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           40 seconds ago       Running             hostpath                                 0                   4875b9d71c445       csi-hostpathplugin-lrbhx                                     kube-system
	195417cf0328a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 41 seconds ago       Running             gcp-auth                                 0                   4815d0e4143b2       gcp-auth-78565c9fb4-f79xb                                    gcp-auth
	8893e0b4f4c31       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             45 seconds ago       Running             controller                               0                   77c82edbeee86       ingress-nginx-controller-675c5ddd98-spjc8                    ingress-nginx
	5d5039ffe6c51       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            52 seconds ago       Running             gadget                                   0                   cd9437be6d469       gadget-zrlpj                                                 gadget
	fa9874677b5b6       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                55 seconds ago       Running             node-driver-registrar                    0                   4875b9d71c445       csi-hostpathplugin-lrbhx                                     kube-system
	80a5e9b22352d       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             57 seconds ago       Running             local-path-provisioner                   0                   135171b8ec080       local-path-provisioner-648f6765c9-zlzmv                      local-path-storage
	8503dbcc9a80b       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              58 seconds ago       Running             yakd                                     0                   efada02867106       yakd-dashboard-5ff678cb9-qpqkf                               yakd-dashboard
	e49247d0ffa77       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   c29787ec0980d       kube-ingress-dns-minikube                                    kube-system
	4568c459c0fc3       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             About a minute ago   Exited              patch                                    1                   5e7f3ed726e9e       ingress-nginx-admission-patch-4f5h7                          ingress-nginx
	bc0685b478e5c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   96b6b40a1aea5       ingress-nginx-admission-create-gcl8s                         ingress-nginx
	2a94fd6377a97       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   5ac4f79ac0376       nvidia-device-plugin-daemonset-7xjnb                         kube-system
	1891841b92bc2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   4875b9d71c445       csi-hostpathplugin-lrbhx                                     kube-system
	364352eda0536       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   e56fe2f16f3df       csi-hostpath-resizer-0                                       kube-system
	2b141a747edd8       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   3e5c304f5bf37       snapshot-controller-7d9fbc56b8-vxkd6                         kube-system
	2e03207b4b26e       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   90e4f26d65f1b       csi-hostpath-attacher-0                                      kube-system
	c89583e34b204       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   f93b1fc390c5d       snapshot-controller-7d9fbc56b8-dz2cc                         kube-system
	9265cc16ebe00       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   c8c6034187aca       registry-proxy-pxgxr                                         kube-system
	9872fee8e1cf9       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   1939f6fa1378e       registry-6b586f9694-vw4fc                                    kube-system
	3c9c0fd6e6096       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   49a610d7de707       metrics-server-85b7d694d7-8kfjg                              kube-system
	5486ddf3fbb47       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   40db2c368ef9a       cloud-spanner-emulator-86bd5cbb97-t6sc8                      default
	f712dddd4573d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   30fb4e3647d6c       storage-provisioner                                          kube-system
	a7d75dad24853       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   e044bc239f060       coredns-66bc5c9577-5586j                                     kube-system
	bcef984a34b58       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   41514d935473d       kube-proxy-d6r65                                             kube-system
	a6c04b76522e4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   919d6abb14fd9       kindnet-kn5mv                                                kube-system
	f412d82dffe40       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   38e22bf0aeed7       kube-scheduler-addons-789752                                 kube-system
	ed5258f512747       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   5d6e656f9e772       kube-apiserver-addons-789752                                 kube-system
	b57e96f12e54c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   33d510217a750       kube-controller-manager-addons-789752                        kube-system
	732fddf2b32de       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   8877489734e9d       etcd-addons-789752                                           kube-system
	
	
	==> coredns [a7d75dad24853dbae39098cf151dae187d4239afff3b61a9449981f10b79fd2a] <==
	[INFO] 10.244.0.6:37318 - 49054 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002097087s
	[INFO] 10.244.0.6:37318 - 29486 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000124654s
	[INFO] 10.244.0.6:37318 - 31037 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000087682s
	[INFO] 10.244.0.6:42690 - 52163 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000155375s
	[INFO] 10.244.0.6:42690 - 51950 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000089651s
	[INFO] 10.244.0.6:58048 - 49852 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008577s
	[INFO] 10.244.0.6:58048 - 49638 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000089019s
	[INFO] 10.244.0.6:60493 - 3099 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000144823s
	[INFO] 10.244.0.6:60493 - 2926 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00016385s
	[INFO] 10.244.0.6:33171 - 42126 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001403031s
	[INFO] 10.244.0.6:33171 - 41921 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001476591s
	[INFO] 10.244.0.6:56232 - 57673 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000112338s
	[INFO] 10.244.0.6:56232 - 57517 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000151411s
	[INFO] 10.244.0.20:39574 - 64264 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000197237s
	[INFO] 10.244.0.20:44757 - 39757 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000089183s
	[INFO] 10.244.0.20:57002 - 52227 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000140491s
	[INFO] 10.244.0.20:57604 - 47439 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124572s
	[INFO] 10.244.0.20:43303 - 3576 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000168889s
	[INFO] 10.244.0.20:38103 - 55645 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000266819s
	[INFO] 10.244.0.20:58291 - 31516 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002022846s
	[INFO] 10.244.0.20:55413 - 59573 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001893688s
	[INFO] 10.244.0.20:50236 - 43393 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004214212s
	[INFO] 10.244.0.20:39562 - 49483 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.004900146s
	[INFO] 10.244.0.23:40531 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000208979s
	[INFO] 10.244.0.23:49011 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000301379s
	
	
	==> describe nodes <==
	Name:               addons-789752
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-789752
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=addons-789752
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_17_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-789752
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-789752"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:17:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-789752
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:19:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:19:39 +0000   Mon, 27 Oct 2025 22:17:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:19:39 +0000   Mon, 27 Oct 2025 22:17:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:19:39 +0000   Mon, 27 Oct 2025 22:17:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:19:39 +0000   Mon, 27 Oct 2025 22:18:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-789752
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                91d5fd2b-6f16-45b2-ae26-1abf741d55ae
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     cloud-spanner-emulator-86bd5cbb97-t6sc8                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  gadget                      gadget-zrlpj                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  gcp-auth                    gcp-auth-78565c9fb4-f79xb                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-spjc8                     100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m14s
	  kube-system                 coredns-66bc5c9577-5586j                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 csi-hostpathplugin-lrbhx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 etcd-addons-789752                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-kn5mv                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-addons-789752                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-addons-789752                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-proxy-d6r65                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-addons-789752                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 metrics-server-85b7d694d7-8kfjg                               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m15s
	  kube-system                 nvidia-device-plugin-daemonset-7xjnb                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 registry-6b586f9694-vw4fc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 registry-creds-764b6fb674-ldrtc                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 registry-proxy-pxgxr                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 snapshot-controller-7d9fbc56b8-dz2cc                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 snapshot-controller-7d9fbc56b8-vxkd6                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  local-path-storage          helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-648f6765c9-zlzmv                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-qpqkf                                0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m18s  kube-proxy       
	  Normal   Starting                 2m25s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m25s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m25s  kubelet          Node addons-789752 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m25s  kubelet          Node addons-789752 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m25s  kubelet          Node addons-789752 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m21s  node-controller  Node addons-789752 event: Registered Node addons-789752 in Controller
	  Normal   NodeReady                98s    kubelet          Node addons-789752 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct27 20:54] overlayfs: idmapped layers are currently not supported
	[Oct27 20:56] overlayfs: idmapped layers are currently not supported
	[Oct27 20:57] overlayfs: idmapped layers are currently not supported
	[Oct27 20:58] overlayfs: idmapped layers are currently not supported
	[ +22.437501] overlayfs: idmapped layers are currently not supported
	[Oct27 20:59] overlayfs: idmapped layers are currently not supported
	[Oct27 21:00] overlayfs: idmapped layers are currently not supported
	[Oct27 21:01] overlayfs: idmapped layers are currently not supported
	[Oct27 21:02] overlayfs: idmapped layers are currently not supported
	[Oct27 21:03] overlayfs: idmapped layers are currently not supported
	[ +50.457876] overlayfs: idmapped layers are currently not supported
	[Oct27 21:04] overlayfs: idmapped layers are currently not supported
	[Oct27 21:05] overlayfs: idmapped layers are currently not supported
	[ +28.375154] overlayfs: idmapped layers are currently not supported
	[Oct27 21:06] overlayfs: idmapped layers are currently not supported
	[ +27.785336] overlayfs: idmapped layers are currently not supported
	[Oct27 21:07] overlayfs: idmapped layers are currently not supported
	[Oct27 21:08] overlayfs: idmapped layers are currently not supported
	[Oct27 21:09] overlayfs: idmapped layers are currently not supported
	[Oct27 21:10] overlayfs: idmapped layers are currently not supported
	[Oct27 21:11] overlayfs: idmapped layers are currently not supported
	[Oct27 21:12] overlayfs: idmapped layers are currently not supported
	[Oct27 21:14] kauditd_printk_skb: 8 callbacks suppressed
	[Oct27 22:15] kauditd_printk_skb: 8 callbacks suppressed
	[Oct27 22:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [732fddf2b32debfeea89e5896d571b702244927ab3040765eda956c6120fd6ad] <==
	{"level":"warn","ts":"2025-10-27T22:17:22.316344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.351211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.395656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.415998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.446589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.474421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.503010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.543022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.566821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.603592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.614003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.630651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.668281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.680528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.703826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.742680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.756312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.775357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:22.863952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:38.719641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:17:38.745074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:18:00.638003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:18:00.658205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:18:00.683735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:18:00.712430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59780","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [195417cf0328af7821666ec831de0c1018572e8d4acab93ac2544ca2c822ce70] <==
	2025/10/27 22:19:09 GCP Auth Webhook started!
	2025/10/27 22:19:17 Ready to marshal response ...
	2025/10/27 22:19:17 Ready to write response ...
	2025/10/27 22:19:17 Ready to marshal response ...
	2025/10/27 22:19:17 Ready to write response ...
	2025/10/27 22:19:18 Ready to marshal response ...
	2025/10/27 22:19:18 Ready to write response ...
	2025/10/27 22:19:39 Ready to marshal response ...
	2025/10/27 22:19:39 Ready to write response ...
	2025/10/27 22:19:40 Ready to marshal response ...
	2025/10/27 22:19:40 Ready to write response ...
	2025/10/27 22:19:40 Ready to marshal response ...
	2025/10/27 22:19:40 Ready to write response ...
	2025/10/27 22:19:49 Ready to marshal response ...
	2025/10/27 22:19:49 Ready to write response ...
	
	
	==> kernel <==
	 22:19:51 up  5:02,  0 user,  load average: 2.76, 3.66, 3.90
	Linux addons-789752 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a6c04b76522e43566ec49632184d8253b7f3efdd2d549705d0bb56dcd3504b32] <==
	E1027 22:18:02.630123       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 22:18:02.630207       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1027 22:18:04.229432       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:18:04.229462       1 metrics.go:72] Registering metrics
	I1027 22:18:04.229529       1 controller.go:711] "Syncing nftables rules"
	I1027 22:18:12.626456       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:18:12.626512       1 main.go:301] handling current node
	I1027 22:18:22.626126       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:18:22.626189       1 main.go:301] handling current node
	I1027 22:18:32.624950       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:18:32.625011       1 main.go:301] handling current node
	I1027 22:18:42.625646       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:18:42.625696       1 main.go:301] handling current node
	I1027 22:18:52.625988       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:18:52.626024       1 main.go:301] handling current node
	I1027 22:19:02.626454       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:19:02.626491       1 main.go:301] handling current node
	I1027 22:19:12.625993       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:19:12.626028       1 main.go:301] handling current node
	I1027 22:19:22.625177       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:19:22.625291       1 main.go:301] handling current node
	I1027 22:19:32.627518       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:19:32.627632       1 main.go:301] handling current node
	I1027 22:19:42.625391       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:19:42.625441       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ed5258f512747f7de544b7f8b20e30fb6309e5f6031e68aa1d93016b71da54db] <==
	E1027 22:18:22.960126       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1027 22:18:22.962106       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.209.22:443: connect: connection refused" logger="UnhandledError"
	E1027 22:18:22.962785       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.209.22:443: connect: connection refused" logger="UnhandledError"
	E1027 22:18:22.968157       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.209.22:443: connect: connection refused" logger="UnhandledError"
	E1027 22:18:22.989206       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.209.22:443: connect: connection refused" logger="UnhandledError"
	E1027 22:18:23.030337       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.209.22:443: connect: connection refused" logger="UnhandledError"
	E1027 22:18:23.111616       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.209.22:443: connect: connection refused" logger="UnhandledError"
	E1027 22:18:23.272594       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.209.22:443: connect: connection refused" logger="UnhandledError"
	E1027 22:18:23.594274       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.209.22:443: connect: connection refused" logger="UnhandledError"
	E1027 22:18:23.641082       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.209.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.209.22:443: connect: connection refused" logger="UnhandledError"
	W1027 22:18:23.961260       1 handler_proxy.go:99] no RequestInfo found in the context
	W1027 22:18:23.961265       1 handler_proxy.go:99] no RequestInfo found in the context
	E1027 22:18:23.961438       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1027 22:18:23.961457       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1027 22:18:23.961515       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1027 22:18:23.962686       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1027 22:18:24.345267       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1027 22:19:28.456497       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:52830: use of closed network connection
	E1027 22:19:28.588963       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:52856: use of closed network connection
	
	
	==> kube-controller-manager [b57e96f12e54c8af6eed4bafb19e50128bf903f3ab267cb2c3f7399260b3c948] <==
	I1027 22:17:30.626136       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:17:30.629086       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-789752" podCIDRs=["10.244.0.0/24"]
	I1027 22:17:30.631006       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 22:17:30.643533       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 22:17:30.643547       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:17:30.647781       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 22:17:30.658218       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 22:17:30.660758       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:17:30.660789       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 22:17:30.660797       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 22:17:30.660757       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 22:17:30.661886       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 22:17:30.662095       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 22:17:30.663445       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 22:17:30.666826       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 22:17:30.669207       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	E1027 22:17:36.678778       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1027 22:18:00.631029       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1027 22:18:00.631190       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1027 22:18:00.631234       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1027 22:18:00.657090       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1027 22:18:00.662245       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1027 22:18:00.731796       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:18:00.762962       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:18:15.618511       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [bcef984a34b582632964a62e2ea13989b587a3a34ab7f141ca2d126c15affbb6] <==
	I1027 22:17:32.657406       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:17:32.800626       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:17:32.901667       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:17:32.901742       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1027 22:17:32.901840       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:17:32.950126       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:17:32.950188       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:17:32.962060       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:17:32.962766       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:17:32.962781       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:17:32.964030       1 config.go:200] "Starting service config controller"
	I1027 22:17:32.964038       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:17:32.964055       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:17:32.964059       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:17:32.964071       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:17:32.964076       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:17:32.964680       1 config.go:309] "Starting node config controller"
	I1027 22:17:32.964686       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:17:32.964692       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:17:33.064957       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 22:17:33.064995       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:17:33.065036       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f412d82dffe403b62ba84bcc01017d9c6d04b401071fcf54955edab34af34160] <==
	E1027 22:17:23.690623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 22:17:23.690691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 22:17:23.690748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 22:17:23.690814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 22:17:23.690874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 22:17:23.690935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 22:17:23.690997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 22:17:23.691061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 22:17:23.691122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 22:17:23.691180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 22:17:23.691237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 22:17:23.691300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 22:17:23.691354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 22:17:23.691411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 22:17:23.691556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 22:17:23.691577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 22:17:24.527357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 22:17:24.583540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 22:17:24.621078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 22:17:24.672179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 22:17:24.761667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 22:17:24.785098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 22:17:24.819053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 22:17:24.927492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1027 22:17:27.878857       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:19:48 addons-789752 kubelet[1315]: I1027 22:19:48.576231    1315 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a98c85b-ffc0-4bbb-b3f4-04bec34d9867-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5" (OuterVolumeSpecName: "data") pod "8a98c85b-ffc0-4bbb-b3f4-04bec34d9867" (UID: "8a98c85b-ffc0-4bbb-b3f4-04bec34d9867"). InnerVolumeSpecName "pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 27 22:19:48 addons-789752 kubelet[1315]: I1027 22:19:48.576419    1315 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a98c85b-ffc0-4bbb-b3f4-04bec34d9867-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "8a98c85b-ffc0-4bbb-b3f4-04bec34d9867" (UID: "8a98c85b-ffc0-4bbb-b3f4-04bec34d9867"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 27 22:19:48 addons-789752 kubelet[1315]: I1027 22:19:48.580464    1315 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a98c85b-ffc0-4bbb-b3f4-04bec34d9867-kube-api-access-d8zrd" (OuterVolumeSpecName: "kube-api-access-d8zrd") pod "8a98c85b-ffc0-4bbb-b3f4-04bec34d9867" (UID: "8a98c85b-ffc0-4bbb-b3f4-04bec34d9867"). InnerVolumeSpecName "kube-api-access-d8zrd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 27 22:19:48 addons-789752 kubelet[1315]: I1027 22:19:48.678056    1315 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8a98c85b-ffc0-4bbb-b3f4-04bec34d9867-gcp-creds\") on node \"addons-789752\" DevicePath \"\""
	Oct 27 22:19:48 addons-789752 kubelet[1315]: I1027 22:19:48.678842    1315 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d8zrd\" (UniqueName: \"kubernetes.io/projected/8a98c85b-ffc0-4bbb-b3f4-04bec34d9867-kube-api-access-d8zrd\") on node \"addons-789752\" DevicePath \"\""
	Oct 27 22:19:48 addons-789752 kubelet[1315]: I1027 22:19:48.678961    1315 reconciler_common.go:299] "Volume detached for volume \"pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5\" (UniqueName: \"kubernetes.io/host-path/8a98c85b-ffc0-4bbb-b3f4-04bec34d9867-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5\") on node \"addons-789752\" DevicePath \"\""
	Oct 27 22:19:49 addons-789752 kubelet[1315]: I1027 22:19:49.407271    1315 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f01cc1a13147b3c00f64a1cbda790d1894d3d5d98e7403c19c8ba6ef87673e20"
	Oct 27 22:19:49 addons-789752 kubelet[1315]: I1027 22:19:49.789755    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/512a82e7-b312-4137-b5db-f7fa7264c299-data\") pod \"helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5\" (UID: \"512a82e7-b312-4137-b5db-f7fa7264c299\") " pod="local-path-storage/helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5"
	Oct 27 22:19:49 addons-789752 kubelet[1315]: I1027 22:19:49.789839    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/512a82e7-b312-4137-b5db-f7fa7264c299-gcp-creds\") pod \"helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5\" (UID: \"512a82e7-b312-4137-b5db-f7fa7264c299\") " pod="local-path-storage/helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5"
	Oct 27 22:19:49 addons-789752 kubelet[1315]: I1027 22:19:49.789872    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29vv5\" (UniqueName: \"kubernetes.io/projected/512a82e7-b312-4137-b5db-f7fa7264c299-kube-api-access-29vv5\") pod \"helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5\" (UID: \"512a82e7-b312-4137-b5db-f7fa7264c299\") " pod="local-path-storage/helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5"
	Oct 27 22:19:49 addons-789752 kubelet[1315]: I1027 22:19:49.789910    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/512a82e7-b312-4137-b5db-f7fa7264c299-script\") pod \"helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5\" (UID: \"512a82e7-b312-4137-b5db-f7fa7264c299\") " pod="local-path-storage/helper-pod-delete-pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5"
	Oct 27 22:19:50 addons-789752 kubelet[1315]: W1027 22:19:50.056269    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a652b6a668fc097b87ba64479bb60d0fa96fd8202cb54c1c465cda9d5582703e/crio-e5dec57aed787c8cc0db0b2a43e78c24fe2ac6411ee6dbfe7d3c618692239952 WatchSource:0}: Error finding container e5dec57aed787c8cc0db0b2a43e78c24fe2ac6411ee6dbfe7d3c618692239952: Status 404 returned error can't find the container with id e5dec57aed787c8cc0db0b2a43e78c24fe2ac6411ee6dbfe7d3c618692239952
	Oct 27 22:19:50 addons-789752 kubelet[1315]: I1027 22:19:50.575073    1315 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a98c85b-ffc0-4bbb-b3f4-04bec34d9867" path="/var/lib/kubelet/pods/8a98c85b-ffc0-4bbb-b3f4-04bec34d9867/volumes"
	Oct 27 22:19:51 addons-789752 kubelet[1315]: I1027 22:19:51.608649    1315 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/512a82e7-b312-4137-b5db-f7fa7264c299-script\") pod \"512a82e7-b312-4137-b5db-f7fa7264c299\" (UID: \"512a82e7-b312-4137-b5db-f7fa7264c299\") "
	Oct 27 22:19:51 addons-789752 kubelet[1315]: I1027 22:19:51.608699    1315 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/512a82e7-b312-4137-b5db-f7fa7264c299-gcp-creds\") pod \"512a82e7-b312-4137-b5db-f7fa7264c299\" (UID: \"512a82e7-b312-4137-b5db-f7fa7264c299\") "
	Oct 27 22:19:51 addons-789752 kubelet[1315]: I1027 22:19:51.608734    1315 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/512a82e7-b312-4137-b5db-f7fa7264c299-data\") pod \"512a82e7-b312-4137-b5db-f7fa7264c299\" (UID: \"512a82e7-b312-4137-b5db-f7fa7264c299\") "
	Oct 27 22:19:51 addons-789752 kubelet[1315]: I1027 22:19:51.608767    1315 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29vv5\" (UniqueName: \"kubernetes.io/projected/512a82e7-b312-4137-b5db-f7fa7264c299-kube-api-access-29vv5\") pod \"512a82e7-b312-4137-b5db-f7fa7264c299\" (UID: \"512a82e7-b312-4137-b5db-f7fa7264c299\") "
	Oct 27 22:19:51 addons-789752 kubelet[1315]: I1027 22:19:51.610178    1315 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/512a82e7-b312-4137-b5db-f7fa7264c299-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "512a82e7-b312-4137-b5db-f7fa7264c299" (UID: "512a82e7-b312-4137-b5db-f7fa7264c299"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 27 22:19:51 addons-789752 kubelet[1315]: I1027 22:19:51.611787    1315 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/512a82e7-b312-4137-b5db-f7fa7264c299-script" (OuterVolumeSpecName: "script") pod "512a82e7-b312-4137-b5db-f7fa7264c299" (UID: "512a82e7-b312-4137-b5db-f7fa7264c299"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 27 22:19:51 addons-789752 kubelet[1315]: I1027 22:19:51.611842    1315 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/512a82e7-b312-4137-b5db-f7fa7264c299-data" (OuterVolumeSpecName: "data") pod "512a82e7-b312-4137-b5db-f7fa7264c299" (UID: "512a82e7-b312-4137-b5db-f7fa7264c299"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 27 22:19:51 addons-789752 kubelet[1315]: I1027 22:19:51.611838    1315 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/512a82e7-b312-4137-b5db-f7fa7264c299-kube-api-access-29vv5" (OuterVolumeSpecName: "kube-api-access-29vv5") pod "512a82e7-b312-4137-b5db-f7fa7264c299" (UID: "512a82e7-b312-4137-b5db-f7fa7264c299"). InnerVolumeSpecName "kube-api-access-29vv5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 27 22:19:51 addons-789752 kubelet[1315]: I1027 22:19:51.710480    1315 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/512a82e7-b312-4137-b5db-f7fa7264c299-data\") on node \"addons-789752\" DevicePath \"\""
	Oct 27 22:19:51 addons-789752 kubelet[1315]: I1027 22:19:51.710521    1315 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-29vv5\" (UniqueName: \"kubernetes.io/projected/512a82e7-b312-4137-b5db-f7fa7264c299-kube-api-access-29vv5\") on node \"addons-789752\" DevicePath \"\""
	Oct 27 22:19:51 addons-789752 kubelet[1315]: I1027 22:19:51.710536    1315 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/512a82e7-b312-4137-b5db-f7fa7264c299-script\") on node \"addons-789752\" DevicePath \"\""
	Oct 27 22:19:51 addons-789752 kubelet[1315]: I1027 22:19:51.710545    1315 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/512a82e7-b312-4137-b5db-f7fa7264c299-gcp-creds\") on node \"addons-789752\" DevicePath \"\""
	
	
	==> storage-provisioner [f712dddd4573d0fe9d735c1c15c28d0975b63f01ad7343c996c9ac9e22da6813] <==
	W1027 22:19:26.787415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:28.790562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:28.795140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:30.798944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:30.803497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:32.806888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:32.813588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:34.817395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:34.821825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:36.825047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:36.832163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:38.835793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:38.844220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:40.848737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:40.858790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:42.861475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:42.866563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:44.870071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:44.879151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:46.882975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:46.891301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:48.895140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:48.901009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:50.904232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:19:50.909367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
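The storage-provisioner warnings above are advisory rather than fatal: the log itself notes that the core v1 Endpoints API is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice, and the steady two-second cadence suggests the provisioner's leader-election loop is touching Endpoints on every renewal. A minimal spot-check that the replacement resource is populated, assuming the addons-789752 context from this run:

	# EndpointSlices are the supported replacement for v1 Endpoints
	kubectl --context addons-789752 get endpointslices.discovery.k8s.io -A
	# the deprecated resource the provisioner still uses, for comparison
	kubectl --context addons-789752 get endpoints -A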
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-789752 -n addons-789752
helpers_test.go:269: (dbg) Run:  kubectl --context addons-789752 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-gcl8s ingress-nginx-admission-patch-4f5h7 registry-creds-764b6fb674-ldrtc
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-789752 describe pod ingress-nginx-admission-create-gcl8s ingress-nginx-admission-patch-4f5h7 registry-creds-764b6fb674-ldrtc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-789752 describe pod ingress-nginx-admission-create-gcl8s ingress-nginx-admission-patch-4f5h7 registry-creds-764b6fb674-ldrtc: exit status 1 (81.48862ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-gcl8s" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4f5h7" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-ldrtc" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-789752 describe pod ingress-nginx-admission-create-gcl8s ingress-nginx-admission-patch-4f5h7 registry-creds-764b6fb674-ldrtc: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-789752 addons disable headlamp --alsologtostderr -v=1: exit status 11 (270.317354ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:19:52.852350 1142861 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:19:52.853370 1142861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:19:52.853424 1142861 out.go:374] Setting ErrFile to fd 2...
	I1027 22:19:52.853445 1142861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:19:52.853737 1142861 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:19:52.854081 1142861 mustload.go:66] Loading cluster: addons-789752
	I1027 22:19:52.854540 1142861 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:19:52.854595 1142861 addons.go:606] checking whether the cluster is paused
	I1027 22:19:52.854732 1142861 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:19:52.854769 1142861 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:19:52.855272 1142861 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:19:52.872569 1142861 ssh_runner.go:195] Run: systemctl --version
	I1027 22:19:52.872624 1142861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:19:52.896847 1142861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:19:53.000869 1142861 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:19:53.000974 1142861 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:19:53.033470 1142861 cri.go:89] found id: "75710d7cc526305b5d44527c3948f7660d0f11c9bb988fea4cc50adb7f70c4b0"
	I1027 22:19:53.033494 1142861 cri.go:89] found id: "ba4375e556d33ee6fe2adbb573ec62057326c21efd49a2ca6746e0e867dca0eb"
	I1027 22:19:53.033504 1142861 cri.go:89] found id: "6360be647f550637a0e7e58311ce8090659f094e7d1fdaace5aa6c9b9e1084ff"
	I1027 22:19:53.033508 1142861 cri.go:89] found id: "718db41ae0e017a0def85acbf7b9a58c43c4917bcde880c3ec1dad468aaa3ad0"
	I1027 22:19:53.033513 1142861 cri.go:89] found id: "fa9874677b5b67f09e92a81d9823e4f1e082a4821a07ab9244b51921cf04483a"
	I1027 22:19:53.033517 1142861 cri.go:89] found id: "e49247d0ffa77a129b4b9b98634538344f523f40499e976caa9a86569158b66d"
	I1027 22:19:53.033520 1142861 cri.go:89] found id: "2a94fd6377a9793dba093bc0477e41ee94cbc624b3f6a43bb885426fc9ced620"
	I1027 22:19:53.033524 1142861 cri.go:89] found id: "1891841b92bc24962a3bc53daf5b28f39360ac3c20a06fa7adc815691b905a55"
	I1027 22:19:53.033527 1142861 cri.go:89] found id: "364352eda05362968f44f25fc3f6a928413dbff5414c84001966e91d713fc3c5"
	I1027 22:19:53.033537 1142861 cri.go:89] found id: "2b141a747edd885ca1f2cb0de68d722d1172c781ee2f1dc422c402f2426b71a5"
	I1027 22:19:53.033544 1142861 cri.go:89] found id: "2e03207b4b26edc5c7672a96ced8ce7c0a8bba6d5d8054568dafe65d952af2fe"
	I1027 22:19:53.033550 1142861 cri.go:89] found id: "c89583e34b204413fbc3cae91a3c194e064a4a74af39d957e557f74a7b9c5dfc"
	I1027 22:19:53.033558 1142861 cri.go:89] found id: "9265cc16ebe00d91c78da71020aea5e78947eb97fca3558b1ee78ec3e8c7ab51"
	I1027 22:19:53.033562 1142861 cri.go:89] found id: "9872fee8e1cf948bd5e39ef7072c2312923b19b6158d32881c3f53e2068a2eba"
	I1027 22:19:53.033565 1142861 cri.go:89] found id: "3c9c0fd6e60966dd77759dd3fca479416d247d034fcaf1409c303183ab3e1ccb"
	I1027 22:19:53.033571 1142861 cri.go:89] found id: "f712dddd4573d0fe9d735c1c15c28d0975b63f01ad7343c996c9ac9e22da6813"
	I1027 22:19:53.033576 1142861 cri.go:89] found id: "a7d75dad24853dbae39098cf151dae187d4239afff3b61a9449981f10b79fd2a"
	I1027 22:19:53.033580 1142861 cri.go:89] found id: "bcef984a34b582632964a62e2ea13989b587a3a34ab7f141ca2d126c15affbb6"
	I1027 22:19:53.033584 1142861 cri.go:89] found id: "a6c04b76522e43566ec49632184d8253b7f3efdd2d549705d0bb56dcd3504b32"
	I1027 22:19:53.033587 1142861 cri.go:89] found id: "f412d82dffe403b62ba84bcc01017d9c6d04b401071fcf54955edab34af34160"
	I1027 22:19:53.033591 1142861 cri.go:89] found id: "ed5258f512747f7de544b7f8b20e30fb6309e5f6031e68aa1d93016b71da54db"
	I1027 22:19:53.033601 1142861 cri.go:89] found id: "b57e96f12e54c8af6eed4bafb19e50128bf903f3ab267cb2c3f7399260b3c948"
	I1027 22:19:53.033604 1142861 cri.go:89] found id: "732fddf2b32debfeea89e5896d571b702244927ab3040765eda956c6120fd6ad"
	I1027 22:19:53.033607 1142861 cri.go:89] found id: ""
	I1027 22:19:53.033656 1142861 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:19:53.051483 1142861 out.go:203] 
	W1027 22:19:53.055179 1142861 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:19:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:19:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:19:53.055210 1142861 out.go:285] * 
	* 
	W1027 22:19:53.063987 1142861 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:19:53.067469 1142861 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-789752 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.78s)
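This failure shares a root cause with the MK_ADDON_DISABLE_PAUSED exits in the CloudSpanner, LocalPath, NvidiaDevicePlugin, and Yakd tests below: after successfully listing the kube-system containers through crictl, minikube runs "sudo runc list -f json" to check whether any of them are paused, and that command fails because /run/runc does not exist on the node. That is consistent with this crio configuration using a runtime whose state directory is not runc's default /run/runc (crun, for example, keeps its state under /run/crun by default). A hedged way to confirm this on the node, assuming upstream default paths that this log does not itself prove:

	minikube -p addons-789752 ssh
	sudo crictl ps -a            # CRI-level listing works (the 'found id:' lines above)
	ls -d /run/runc /run/crun    # shows which runtime state directory actually exists
	sudo runc list -f json       # reproduces: open /run/runc: no such file or directory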

                                                
                                    
TestAddons/parallel/CloudSpanner (5.34s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-t6sc8" [ac2c4789-f093-4c0a-b217-2efa74964d6c] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006420698s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-789752 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (328.057427ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:19:49.041906 1142161 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:19:49.043877 1142161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:19:49.043942 1142161 out.go:374] Setting ErrFile to fd 2...
	I1027 22:19:49.043963 1142161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:19:49.044316 1142161 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:19:49.044674 1142161 mustload.go:66] Loading cluster: addons-789752
	I1027 22:19:49.045116 1142161 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:19:49.045160 1142161 addons.go:606] checking whether the cluster is paused
	I1027 22:19:49.045308 1142161 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:19:49.045340 1142161 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:19:49.045836 1142161 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:19:49.074124 1142161 ssh_runner.go:195] Run: systemctl --version
	I1027 22:19:49.074187 1142161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:19:49.096811 1142161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:19:49.208667 1142161 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:19:49.208742 1142161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:19:49.248595 1142161 cri.go:89] found id: "75710d7cc526305b5d44527c3948f7660d0f11c9bb988fea4cc50adb7f70c4b0"
	I1027 22:19:49.248620 1142161 cri.go:89] found id: "ba4375e556d33ee6fe2adbb573ec62057326c21efd49a2ca6746e0e867dca0eb"
	I1027 22:19:49.248625 1142161 cri.go:89] found id: "6360be647f550637a0e7e58311ce8090659f094e7d1fdaace5aa6c9b9e1084ff"
	I1027 22:19:49.248639 1142161 cri.go:89] found id: "718db41ae0e017a0def85acbf7b9a58c43c4917bcde880c3ec1dad468aaa3ad0"
	I1027 22:19:49.248647 1142161 cri.go:89] found id: "fa9874677b5b67f09e92a81d9823e4f1e082a4821a07ab9244b51921cf04483a"
	I1027 22:19:49.248651 1142161 cri.go:89] found id: "e49247d0ffa77a129b4b9b98634538344f523f40499e976caa9a86569158b66d"
	I1027 22:19:49.248655 1142161 cri.go:89] found id: "2a94fd6377a9793dba093bc0477e41ee94cbc624b3f6a43bb885426fc9ced620"
	I1027 22:19:49.248658 1142161 cri.go:89] found id: "1891841b92bc24962a3bc53daf5b28f39360ac3c20a06fa7adc815691b905a55"
	I1027 22:19:49.248662 1142161 cri.go:89] found id: "364352eda05362968f44f25fc3f6a928413dbff5414c84001966e91d713fc3c5"
	I1027 22:19:49.248668 1142161 cri.go:89] found id: "2b141a747edd885ca1f2cb0de68d722d1172c781ee2f1dc422c402f2426b71a5"
	I1027 22:19:49.248671 1142161 cri.go:89] found id: "2e03207b4b26edc5c7672a96ced8ce7c0a8bba6d5d8054568dafe65d952af2fe"
	I1027 22:19:49.248675 1142161 cri.go:89] found id: "c89583e34b204413fbc3cae91a3c194e064a4a74af39d957e557f74a7b9c5dfc"
	I1027 22:19:49.248679 1142161 cri.go:89] found id: "9265cc16ebe00d91c78da71020aea5e78947eb97fca3558b1ee78ec3e8c7ab51"
	I1027 22:19:49.248682 1142161 cri.go:89] found id: "9872fee8e1cf948bd5e39ef7072c2312923b19b6158d32881c3f53e2068a2eba"
	I1027 22:19:49.248685 1142161 cri.go:89] found id: "3c9c0fd6e60966dd77759dd3fca479416d247d034fcaf1409c303183ab3e1ccb"
	I1027 22:19:49.248690 1142161 cri.go:89] found id: "f712dddd4573d0fe9d735c1c15c28d0975b63f01ad7343c996c9ac9e22da6813"
	I1027 22:19:49.248693 1142161 cri.go:89] found id: "a7d75dad24853dbae39098cf151dae187d4239afff3b61a9449981f10b79fd2a"
	I1027 22:19:49.248698 1142161 cri.go:89] found id: "bcef984a34b582632964a62e2ea13989b587a3a34ab7f141ca2d126c15affbb6"
	I1027 22:19:49.248702 1142161 cri.go:89] found id: "a6c04b76522e43566ec49632184d8253b7f3efdd2d549705d0bb56dcd3504b32"
	I1027 22:19:49.248705 1142161 cri.go:89] found id: "f412d82dffe403b62ba84bcc01017d9c6d04b401071fcf54955edab34af34160"
	I1027 22:19:49.248711 1142161 cri.go:89] found id: "ed5258f512747f7de544b7f8b20e30fb6309e5f6031e68aa1d93016b71da54db"
	I1027 22:19:49.248719 1142161 cri.go:89] found id: "b57e96f12e54c8af6eed4bafb19e50128bf903f3ab267cb2c3f7399260b3c948"
	I1027 22:19:49.248725 1142161 cri.go:89] found id: "732fddf2b32debfeea89e5896d571b702244927ab3040765eda956c6120fd6ad"
	I1027 22:19:49.248729 1142161 cri.go:89] found id: ""
	I1027 22:19:49.248784 1142161 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:19:49.269374 1142161 out.go:203] 
	W1027 22:19:49.272989 1142161 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:19:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:19:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:19:49.273040 1142161 out.go:285] * 
	* 
	W1027 22:19:49.283284 1142161 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:19:49.286858 1142161 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-789752 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.34s)

TestAddons/parallel/LocalPath (9.77s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-789752 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-789752 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-789752 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [8a98c85b-ffc0-4bbb-b3f4-04bec34d9867] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [8a98c85b-ffc0-4bbb-b3f4-04bec34d9867] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [8a98c85b-ffc0-4bbb-b3f4-04bec34d9867] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004036807s
addons_test.go:967: (dbg) Run:  kubectl --context addons-789752 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 ssh "cat /opt/local-path-provisioner/pvc-b66800b3-f8e9-40fb-9d4f-1b0789ca90c5_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-789752 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-789752 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-789752 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (447.822122ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:19:49.831804 1142332 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:19:49.833940 1142332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:19:49.833958 1142332 out.go:374] Setting ErrFile to fd 2...
	I1027 22:19:49.833964 1142332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:19:49.834290 1142332 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:19:49.834758 1142332 mustload.go:66] Loading cluster: addons-789752
	I1027 22:19:49.835170 1142332 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:19:49.835201 1142332 addons.go:606] checking whether the cluster is paused
	I1027 22:19:49.835372 1142332 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:19:49.835390 1142332 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:19:49.838241 1142332 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:19:49.866864 1142332 ssh_runner.go:195] Run: systemctl --version
	I1027 22:19:49.866920 1142332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:19:49.904439 1142332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:19:50.037115 1142332 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:19:50.037265 1142332 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:19:50.112704 1142332 cri.go:89] found id: "75710d7cc526305b5d44527c3948f7660d0f11c9bb988fea4cc50adb7f70c4b0"
	I1027 22:19:50.112774 1142332 cri.go:89] found id: "ba4375e556d33ee6fe2adbb573ec62057326c21efd49a2ca6746e0e867dca0eb"
	I1027 22:19:50.112795 1142332 cri.go:89] found id: "6360be647f550637a0e7e58311ce8090659f094e7d1fdaace5aa6c9b9e1084ff"
	I1027 22:19:50.112825 1142332 cri.go:89] found id: "718db41ae0e017a0def85acbf7b9a58c43c4917bcde880c3ec1dad468aaa3ad0"
	I1027 22:19:50.112860 1142332 cri.go:89] found id: "fa9874677b5b67f09e92a81d9823e4f1e082a4821a07ab9244b51921cf04483a"
	I1027 22:19:50.112902 1142332 cri.go:89] found id: "e49247d0ffa77a129b4b9b98634538344f523f40499e976caa9a86569158b66d"
	I1027 22:19:50.112933 1142332 cri.go:89] found id: "2a94fd6377a9793dba093bc0477e41ee94cbc624b3f6a43bb885426fc9ced620"
	I1027 22:19:50.112956 1142332 cri.go:89] found id: "1891841b92bc24962a3bc53daf5b28f39360ac3c20a06fa7adc815691b905a55"
	I1027 22:19:50.112985 1142332 cri.go:89] found id: "364352eda05362968f44f25fc3f6a928413dbff5414c84001966e91d713fc3c5"
	I1027 22:19:50.113015 1142332 cri.go:89] found id: "2b141a747edd885ca1f2cb0de68d722d1172c781ee2f1dc422c402f2426b71a5"
	I1027 22:19:50.113037 1142332 cri.go:89] found id: "2e03207b4b26edc5c7672a96ced8ce7c0a8bba6d5d8054568dafe65d952af2fe"
	I1027 22:19:50.113065 1142332 cri.go:89] found id: "c89583e34b204413fbc3cae91a3c194e064a4a74af39d957e557f74a7b9c5dfc"
	I1027 22:19:50.113087 1142332 cri.go:89] found id: "9265cc16ebe00d91c78da71020aea5e78947eb97fca3558b1ee78ec3e8c7ab51"
	I1027 22:19:50.113108 1142332 cri.go:89] found id: "9872fee8e1cf948bd5e39ef7072c2312923b19b6158d32881c3f53e2068a2eba"
	I1027 22:19:50.113129 1142332 cri.go:89] found id: "3c9c0fd6e60966dd77759dd3fca479416d247d034fcaf1409c303183ab3e1ccb"
	I1027 22:19:50.113163 1142332 cri.go:89] found id: "f712dddd4573d0fe9d735c1c15c28d0975b63f01ad7343c996c9ac9e22da6813"
	I1027 22:19:50.113196 1142332 cri.go:89] found id: "a7d75dad24853dbae39098cf151dae187d4239afff3b61a9449981f10b79fd2a"
	I1027 22:19:50.113221 1142332 cri.go:89] found id: "bcef984a34b582632964a62e2ea13989b587a3a34ab7f141ca2d126c15affbb6"
	I1027 22:19:50.113242 1142332 cri.go:89] found id: "a6c04b76522e43566ec49632184d8253b7f3efdd2d549705d0bb56dcd3504b32"
	I1027 22:19:50.113261 1142332 cri.go:89] found id: "f412d82dffe403b62ba84bcc01017d9c6d04b401071fcf54955edab34af34160"
	I1027 22:19:50.113291 1142332 cri.go:89] found id: "ed5258f512747f7de544b7f8b20e30fb6309e5f6031e68aa1d93016b71da54db"
	I1027 22:19:50.113314 1142332 cri.go:89] found id: "b57e96f12e54c8af6eed4bafb19e50128bf903f3ab267cb2c3f7399260b3c948"
	I1027 22:19:50.113333 1142332 cri.go:89] found id: "732fddf2b32debfeea89e5896d571b702244927ab3040765eda956c6120fd6ad"
	I1027 22:19:50.113351 1142332 cri.go:89] found id: ""
	I1027 22:19:50.113422 1142332 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:19:50.158359 1142332 out.go:203] 
	W1027 22:19:50.161858 1142332 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:19:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:19:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:19:50.161948 1142332 out.go:285] * 
	* 
	W1027 22:19:50.179952 1142332 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:19:50.183073 1142332 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-789752 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.77s)

TestAddons/parallel/NvidiaDevicePlugin (5.27s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-7xjnb" [d25c58e2-5389-4ef7-bdb1-7f57a029a00b] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004166896s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-789752 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (268.526025ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:19:40.209344 1141811 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:19:40.210188 1141811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:19:40.210231 1141811 out.go:374] Setting ErrFile to fd 2...
	I1027 22:19:40.210252 1141811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:19:40.210619 1141811 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:19:40.210966 1141811 mustload.go:66] Loading cluster: addons-789752
	I1027 22:19:40.211366 1141811 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:19:40.211398 1141811 addons.go:606] checking whether the cluster is paused
	I1027 22:19:40.211518 1141811 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:19:40.211557 1141811 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:19:40.212113 1141811 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:19:40.231670 1141811 ssh_runner.go:195] Run: systemctl --version
	I1027 22:19:40.231729 1141811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:19:40.249754 1141811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:19:40.353207 1141811 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:19:40.353323 1141811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:19:40.383165 1141811 cri.go:89] found id: "75710d7cc526305b5d44527c3948f7660d0f11c9bb988fea4cc50adb7f70c4b0"
	I1027 22:19:40.383189 1141811 cri.go:89] found id: "ba4375e556d33ee6fe2adbb573ec62057326c21efd49a2ca6746e0e867dca0eb"
	I1027 22:19:40.383194 1141811 cri.go:89] found id: "6360be647f550637a0e7e58311ce8090659f094e7d1fdaace5aa6c9b9e1084ff"
	I1027 22:19:40.383207 1141811 cri.go:89] found id: "718db41ae0e017a0def85acbf7b9a58c43c4917bcde880c3ec1dad468aaa3ad0"
	I1027 22:19:40.383211 1141811 cri.go:89] found id: "fa9874677b5b67f09e92a81d9823e4f1e082a4821a07ab9244b51921cf04483a"
	I1027 22:19:40.383234 1141811 cri.go:89] found id: "e49247d0ffa77a129b4b9b98634538344f523f40499e976caa9a86569158b66d"
	I1027 22:19:40.383247 1141811 cri.go:89] found id: "2a94fd6377a9793dba093bc0477e41ee94cbc624b3f6a43bb885426fc9ced620"
	I1027 22:19:40.383250 1141811 cri.go:89] found id: "1891841b92bc24962a3bc53daf5b28f39360ac3c20a06fa7adc815691b905a55"
	I1027 22:19:40.383253 1141811 cri.go:89] found id: "364352eda05362968f44f25fc3f6a928413dbff5414c84001966e91d713fc3c5"
	I1027 22:19:40.383260 1141811 cri.go:89] found id: "2b141a747edd885ca1f2cb0de68d722d1172c781ee2f1dc422c402f2426b71a5"
	I1027 22:19:40.383263 1141811 cri.go:89] found id: "2e03207b4b26edc5c7672a96ced8ce7c0a8bba6d5d8054568dafe65d952af2fe"
	I1027 22:19:40.383266 1141811 cri.go:89] found id: "c89583e34b204413fbc3cae91a3c194e064a4a74af39d957e557f74a7b9c5dfc"
	I1027 22:19:40.383269 1141811 cri.go:89] found id: "9265cc16ebe00d91c78da71020aea5e78947eb97fca3558b1ee78ec3e8c7ab51"
	I1027 22:19:40.383273 1141811 cri.go:89] found id: "9872fee8e1cf948bd5e39ef7072c2312923b19b6158d32881c3f53e2068a2eba"
	I1027 22:19:40.383276 1141811 cri.go:89] found id: "3c9c0fd6e60966dd77759dd3fca479416d247d034fcaf1409c303183ab3e1ccb"
	I1027 22:19:40.383281 1141811 cri.go:89] found id: "f712dddd4573d0fe9d735c1c15c28d0975b63f01ad7343c996c9ac9e22da6813"
	I1027 22:19:40.383288 1141811 cri.go:89] found id: "a7d75dad24853dbae39098cf151dae187d4239afff3b61a9449981f10b79fd2a"
	I1027 22:19:40.383292 1141811 cri.go:89] found id: "bcef984a34b582632964a62e2ea13989b587a3a34ab7f141ca2d126c15affbb6"
	I1027 22:19:40.383295 1141811 cri.go:89] found id: "a6c04b76522e43566ec49632184d8253b7f3efdd2d549705d0bb56dcd3504b32"
	I1027 22:19:40.383319 1141811 cri.go:89] found id: "f412d82dffe403b62ba84bcc01017d9c6d04b401071fcf54955edab34af34160"
	I1027 22:19:40.383325 1141811 cri.go:89] found id: "ed5258f512747f7de544b7f8b20e30fb6309e5f6031e68aa1d93016b71da54db"
	I1027 22:19:40.383344 1141811 cri.go:89] found id: "b57e96f12e54c8af6eed4bafb19e50128bf903f3ab267cb2c3f7399260b3c948"
	I1027 22:19:40.383347 1141811 cri.go:89] found id: "732fddf2b32debfeea89e5896d571b702244927ab3040765eda956c6120fd6ad"
	I1027 22:19:40.383351 1141811 cri.go:89] found id: ""
	I1027 22:19:40.383420 1141811 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:19:40.399112 1141811 out.go:203] 
	W1027 22:19:40.402272 1141811 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:19:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:19:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:19:40.402301 1141811 out.go:285] * 
	* 
	W1027 22:19:40.411203 1141811 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:19:40.414209 1141811 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-789752 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)

TestAddons/parallel/Yakd (6.28s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-qpqkf" [0ae2582e-9c47-4168-8f0c-3560d36b02c1] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003567569s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-789752 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-789752 addons disable yakd --alsologtostderr -v=1: exit status 11 (271.368247ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:19:34.926863 1141720 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:19:34.927732 1141720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:19:34.927746 1141720 out.go:374] Setting ErrFile to fd 2...
	I1027 22:19:34.927752 1141720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:19:34.928030 1141720 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:19:34.928404 1141720 mustload.go:66] Loading cluster: addons-789752
	I1027 22:19:34.928798 1141720 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:19:34.928818 1141720 addons.go:606] checking whether the cluster is paused
	I1027 22:19:34.928926 1141720 config.go:182] Loaded profile config "addons-789752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:19:34.928941 1141720 host.go:66] Checking if "addons-789752" exists ...
	I1027 22:19:34.929582 1141720 cli_runner.go:164] Run: docker container inspect addons-789752 --format={{.State.Status}}
	I1027 22:19:34.947992 1141720 ssh_runner.go:195] Run: systemctl --version
	I1027 22:19:34.948063 1141720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-789752
	I1027 22:19:34.965805 1141720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34244 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/addons-789752/id_rsa Username:docker}
	I1027 22:19:35.073466 1141720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:19:35.073552 1141720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:19:35.108788 1141720 cri.go:89] found id: "75710d7cc526305b5d44527c3948f7660d0f11c9bb988fea4cc50adb7f70c4b0"
	I1027 22:19:35.108815 1141720 cri.go:89] found id: "ba4375e556d33ee6fe2adbb573ec62057326c21efd49a2ca6746e0e867dca0eb"
	I1027 22:19:35.108820 1141720 cri.go:89] found id: "6360be647f550637a0e7e58311ce8090659f094e7d1fdaace5aa6c9b9e1084ff"
	I1027 22:19:35.108831 1141720 cri.go:89] found id: "718db41ae0e017a0def85acbf7b9a58c43c4917bcde880c3ec1dad468aaa3ad0"
	I1027 22:19:35.108835 1141720 cri.go:89] found id: "fa9874677b5b67f09e92a81d9823e4f1e082a4821a07ab9244b51921cf04483a"
	I1027 22:19:35.108839 1141720 cri.go:89] found id: "e49247d0ffa77a129b4b9b98634538344f523f40499e976caa9a86569158b66d"
	I1027 22:19:35.108843 1141720 cri.go:89] found id: "2a94fd6377a9793dba093bc0477e41ee94cbc624b3f6a43bb885426fc9ced620"
	I1027 22:19:35.108846 1141720 cri.go:89] found id: "1891841b92bc24962a3bc53daf5b28f39360ac3c20a06fa7adc815691b905a55"
	I1027 22:19:35.108850 1141720 cri.go:89] found id: "364352eda05362968f44f25fc3f6a928413dbff5414c84001966e91d713fc3c5"
	I1027 22:19:35.108857 1141720 cri.go:89] found id: "2b141a747edd885ca1f2cb0de68d722d1172c781ee2f1dc422c402f2426b71a5"
	I1027 22:19:35.108860 1141720 cri.go:89] found id: "2e03207b4b26edc5c7672a96ced8ce7c0a8bba6d5d8054568dafe65d952af2fe"
	I1027 22:19:35.108864 1141720 cri.go:89] found id: "c89583e34b204413fbc3cae91a3c194e064a4a74af39d957e557f74a7b9c5dfc"
	I1027 22:19:35.108867 1141720 cri.go:89] found id: "9265cc16ebe00d91c78da71020aea5e78947eb97fca3558b1ee78ec3e8c7ab51"
	I1027 22:19:35.108870 1141720 cri.go:89] found id: "9872fee8e1cf948bd5e39ef7072c2312923b19b6158d32881c3f53e2068a2eba"
	I1027 22:19:35.108873 1141720 cri.go:89] found id: "3c9c0fd6e60966dd77759dd3fca479416d247d034fcaf1409c303183ab3e1ccb"
	I1027 22:19:35.108878 1141720 cri.go:89] found id: "f712dddd4573d0fe9d735c1c15c28d0975b63f01ad7343c996c9ac9e22da6813"
	I1027 22:19:35.108882 1141720 cri.go:89] found id: "a7d75dad24853dbae39098cf151dae187d4239afff3b61a9449981f10b79fd2a"
	I1027 22:19:35.108886 1141720 cri.go:89] found id: "bcef984a34b582632964a62e2ea13989b587a3a34ab7f141ca2d126c15affbb6"
	I1027 22:19:35.108889 1141720 cri.go:89] found id: "a6c04b76522e43566ec49632184d8253b7f3efdd2d549705d0bb56dcd3504b32"
	I1027 22:19:35.108892 1141720 cri.go:89] found id: "f412d82dffe403b62ba84bcc01017d9c6d04b401071fcf54955edab34af34160"
	I1027 22:19:35.108897 1141720 cri.go:89] found id: "ed5258f512747f7de544b7f8b20e30fb6309e5f6031e68aa1d93016b71da54db"
	I1027 22:19:35.108901 1141720 cri.go:89] found id: "b57e96f12e54c8af6eed4bafb19e50128bf903f3ab267cb2c3f7399260b3c948"
	I1027 22:19:35.108907 1141720 cri.go:89] found id: "732fddf2b32debfeea89e5896d571b702244927ab3040765eda956c6120fd6ad"
	I1027 22:19:35.108910 1141720 cri.go:89] found id: ""
	I1027 22:19:35.108963 1141720 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 22:19:35.124841 1141720 out.go:203] 
	W1027 22:19:35.127751 1141720 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:19:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:19:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 22:19:35.127780 1141720 out.go:285] * 
	* 
	W1027 22:19:35.136644 1141720 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 22:19:35.139573 1141720 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-789752 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.28s)

TestFunctional/parallel/ServiceCmdConnect (603.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-812436 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-812436 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-m6t8s" [17fe6bf7-6244-4ced-aa3d-fc843f5d69f0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-812436 -n functional-812436
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-27 22:36:24.515492758 +0000 UTC m=+1215.147697180
functional_test.go:1645: (dbg) Run:  kubectl --context functional-812436 describe po hello-node-connect-7d85dfc575-m6t8s -n default
functional_test.go:1645: (dbg) kubectl --context functional-812436 describe po hello-node-connect-7d85dfc575-m6t8s -n default:
Name:             hello-node-connect-7d85dfc575-m6t8s
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-812436/192.168.49.2
Start Time:       Mon, 27 Oct 2025 22:26:24 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dntfq (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-dntfq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-m6t8s to functional-812436
  Normal   Pulling    7m2s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m2s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m2s (x5 over 9m59s)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m56s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m56s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff
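The last four events pin down the root cause: CRI-O's short-name resolution is in enforcing mode (short-name-mode in /etc/containers/registries.conf), so the unqualified reference kicbase/echo-server:latest, which matches more than one configured unqualified-search registry, is rejected as ambiguous rather than silently resolved to a default registry. A minimal way to confirm and work around this, assuming the intended image is Docker Hub's kicbase/echo-server; these commands are a sketch and were not run as part of this log:

	# Reproduce the ambiguous short-name failure on the node (assumes the functional-812436 profile above):
	minikube -p functional-812436 ssh -- sudo crictl pull kicbase/echo-server
	# A fully qualified reference bypasses short-name search entirely:
	minikube -p functional-812436 ssh -- sudo crictl pull docker.io/kicbase/echo-server
	# Re-point the Deployment at the qualified name so the kubelet can pull:
	kubectl --context functional-812436 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server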
functional_test.go:1645: (dbg) Run:  kubectl --context functional-812436 logs hello-node-connect-7d85dfc575-m6t8s -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-812436 logs hello-node-connect-7d85dfc575-m6t8s -n default: exit status 1 (99.465016ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-m6t8s" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-812436 logs hello-node-connect-7d85dfc575-m6t8s -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-812436 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-m6t8s
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-812436/192.168.49.2
Start Time:       Mon, 27 Oct 2025 22:26:24 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dntfq (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-dntfq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-m6t8s to functional-812436
  Normal   Pulling    7m2s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m2s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m2s (x5 over 9m59s)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m56s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m56s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-812436 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-812436 logs -l app=hello-node-connect: exit status 1 (84.240678ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-m6t8s" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-812436 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-812436 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.107.9.64
IPs:                      10.107.9.64
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32069/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
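Consistent with the pod never turning Ready, the Endpoints line above is empty, so NodePort 32069 on 192.168.49.2 has nothing to route to; any connectivity probe would fail regardless of the service wiring. A quick confirmation, sketched here rather than captured in this run:

	# Both checks assume the functional-812436 context used throughout this test:
	kubectl --context functional-812436 get endpoints hello-node-connect             # ENDPOINTS column is empty
	kubectl --context functional-812436 get pods -l app=hello-node-connect -o wide   # pod stuck in ImagePullBackOff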
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-812436
helpers_test.go:243: (dbg) docker inspect functional-812436:

-- stdout --
	[
	    {
	        "Id": "0087310c92e8f4f734177816b5945f8e853def62538fdaf482f9714906f9d6ad",
	        "Created": "2025-10-27T22:23:46.107984844Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1150379,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T22:23:46.175265656Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/0087310c92e8f4f734177816b5945f8e853def62538fdaf482f9714906f9d6ad/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0087310c92e8f4f734177816b5945f8e853def62538fdaf482f9714906f9d6ad/hostname",
	        "HostsPath": "/var/lib/docker/containers/0087310c92e8f4f734177816b5945f8e853def62538fdaf482f9714906f9d6ad/hosts",
	        "LogPath": "/var/lib/docker/containers/0087310c92e8f4f734177816b5945f8e853def62538fdaf482f9714906f9d6ad/0087310c92e8f4f734177816b5945f8e853def62538fdaf482f9714906f9d6ad-json.log",
	        "Name": "/functional-812436",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-812436:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-812436",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0087310c92e8f4f734177816b5945f8e853def62538fdaf482f9714906f9d6ad",
	                "LowerDir": "/var/lib/docker/overlay2/300e94954285be6ce37cf9b520a714eec7f60e4ac6eeff852a26200e3a0cf95f-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/300e94954285be6ce37cf9b520a714eec7f60e4ac6eeff852a26200e3a0cf95f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/300e94954285be6ce37cf9b520a714eec7f60e4ac6eeff852a26200e3a0cf95f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/300e94954285be6ce37cf9b520a714eec7f60e4ac6eeff852a26200e3a0cf95f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-812436",
	                "Source": "/var/lib/docker/volumes/functional-812436/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-812436",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-812436",
	                "name.minikube.sigs.k8s.io": "functional-812436",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3d9fe609c0ba20963ea1318fc55e6866cb96887dad287ea2f4756c4055845ed9",
	            "SandboxKey": "/var/run/docker/netns/3d9fe609c0ba",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34254"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34258"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-812436": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:82:ab:8c:dd:9c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "afca9434e832cb0a858acfa541451c04e5817a768326eca0f4b994739911b934",
	                    "EndpointID": "9daf52fd4546560a5817a1acabb260c044174cce4e3caca8b7cc044e4c13da79",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-812436",
	                        "0087310c92e8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-812436 -n functional-812436
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-812436 logs -n 25: (1.484475091s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-812436 cache reload                                                                                            │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:25 UTC │ 27 Oct 25 22:25 UTC │
	│ ssh     │ functional-812436 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                   │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:25 UTC │ 27 Oct 25 22:25 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 27 Oct 25 22:25 UTC │ 27 Oct 25 22:25 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                       │ minikube          │ jenkins │ v1.37.0 │ 27 Oct 25 22:25 UTC │ 27 Oct 25 22:25 UTC │
	│ kubectl │ functional-812436 kubectl -- --context functional-812436 get pods                                                         │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:25 UTC │ 27 Oct 25 22:25 UTC │
	│ start   │ -p functional-812436 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                  │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:25 UTC │ 27 Oct 25 22:26 UTC │
	│ service │ invalid-svc -p functional-812436                                                                                          │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │                     │
	│ config  │ functional-812436 config unset cpus                                                                                       │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │ 27 Oct 25 22:26 UTC │
	│ config  │ functional-812436 config get cpus                                                                                         │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │                     │
	│ config  │ functional-812436 config set cpus 2                                                                                       │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │ 27 Oct 25 22:26 UTC │
	│ config  │ functional-812436 config get cpus                                                                                         │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │ 27 Oct 25 22:26 UTC │
	│ config  │ functional-812436 config unset cpus                                                                                       │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │ 27 Oct 25 22:26 UTC │
	│ ssh     │ functional-812436 ssh -n functional-812436 sudo cat /home/docker/cp-test.txt                                              │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │ 27 Oct 25 22:26 UTC │
	│ config  │ functional-812436 config get cpus                                                                                         │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │                     │
	│ ssh     │ functional-812436 ssh echo hello                                                                                          │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │ 27 Oct 25 22:26 UTC │
	│ cp      │ functional-812436 cp functional-812436:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd457839162/001/cp-test.txt │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │ 27 Oct 25 22:26 UTC │
	│ ssh     │ functional-812436 ssh cat /etc/hostname                                                                                   │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │ 27 Oct 25 22:26 UTC │
	│ ssh     │ functional-812436 ssh -n functional-812436 sudo cat /home/docker/cp-test.txt                                              │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │ 27 Oct 25 22:26 UTC │
	│ tunnel  │ functional-812436 tunnel --alsologtostderr                                                                                │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │                     │
	│ tunnel  │ functional-812436 tunnel --alsologtostderr                                                                                │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │                     │
	│ cp      │ functional-812436 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │ 27 Oct 25 22:26 UTC │
	│ ssh     │ functional-812436 ssh -n functional-812436 sudo cat /tmp/does/not/exist/cp-test.txt                                       │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │ 27 Oct 25 22:26 UTC │
	│ tunnel  │ functional-812436 tunnel --alsologtostderr                                                                                │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │                     │
	│ addons  │ functional-812436 addons list                                                                                             │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │ 27 Oct 25 22:26 UTC │
	│ addons  │ functional-812436 addons list -o json                                                                                     │ functional-812436 │ jenkins │ v1.37.0 │ 27 Oct 25 22:26 UTC │ 27 Oct 25 22:26 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:25:32
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:25:32.805499 1154508 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:25:32.805635 1154508 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:25:32.805639 1154508 out.go:374] Setting ErrFile to fd 2...
	I1027 22:25:32.805643 1154508 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:25:32.805996 1154508 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:25:32.806435 1154508 out.go:368] Setting JSON to false
	I1027 22:25:32.807319 1154508 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18482,"bootTime":1761585451,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 22:25:32.807375 1154508 start.go:143] virtualization:  
	I1027 22:25:32.810712 1154508 out.go:179] * [functional-812436] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 22:25:32.814529 1154508 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:25:32.814588 1154508 notify.go:221] Checking for updates...
	I1027 22:25:32.820265 1154508 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:25:32.823175 1154508 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 22:25:32.826121 1154508 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 22:25:32.829017 1154508 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 22:25:32.831835 1154508 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:25:32.835234 1154508 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:25:32.835325 1154508 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:25:32.861287 1154508 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 22:25:32.861394 1154508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:25:32.929557 1154508 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-27 22:25:32.920648648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:25:32.929655 1154508 docker.go:318] overlay module found
	I1027 22:25:32.932806 1154508 out.go:179] * Using the docker driver based on existing profile
	I1027 22:25:32.935554 1154508 start.go:307] selected driver: docker
	I1027 22:25:32.935572 1154508 start.go:928] validating driver "docker" against &{Name:functional-812436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-812436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:25:32.935669 1154508 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:25:32.935772 1154508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:25:33.017202 1154508 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-27 22:25:33.006446644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:25:33.017762 1154508 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:25:33.017782 1154508 cni.go:84] Creating CNI manager for ""
	I1027 22:25:33.017835 1154508 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:25:33.017888 1154508 start.go:351] cluster config:
	{Name:functional-812436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-812436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:25:33.021213 1154508 out.go:179] * Starting "functional-812436" primary control-plane node in "functional-812436" cluster
	I1027 22:25:33.024152 1154508 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:25:33.027239 1154508 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:25:33.030201 1154508 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:25:33.030258 1154508 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 22:25:33.030300 1154508 cache.go:59] Caching tarball of preloaded images
	I1027 22:25:33.030336 1154508 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:25:33.030463 1154508 preload.go:233] Found /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 22:25:33.030472 1154508 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 22:25:33.030587 1154508 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/config.json ...
	I1027 22:25:33.052358 1154508 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 22:25:33.052370 1154508 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 22:25:33.052389 1154508 cache.go:233] Successfully downloaded all kic artifacts
	I1027 22:25:33.052413 1154508 start.go:360] acquireMachinesLock for functional-812436: {Name:mk44252e461151efe385aa24e7db3addab441ded Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 22:25:33.052478 1154508 start.go:364] duration metric: took 48.673µs to acquireMachinesLock for "functional-812436"
	I1027 22:25:33.052498 1154508 start.go:96] Skipping create...Using existing machine configuration
	I1027 22:25:33.052502 1154508 fix.go:55] fixHost starting: 
	I1027 22:25:33.052940 1154508 cli_runner.go:164] Run: docker container inspect functional-812436 --format={{.State.Status}}
	I1027 22:25:33.070639 1154508 fix.go:113] recreateIfNeeded on functional-812436: state=Running err=<nil>
	W1027 22:25:33.070660 1154508 fix.go:139] unexpected machine state, will restart: <nil>
	I1027 22:25:33.073855 1154508 out.go:252] * Updating the running docker "functional-812436" container ...
	I1027 22:25:33.073883 1154508 machine.go:94] provisionDockerMachine start ...
	I1027 22:25:33.073982 1154508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812436
	I1027 22:25:33.092125 1154508 main.go:143] libmachine: Using SSH client type: native
	I1027 22:25:33.092460 1154508 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34254 <nil> <nil>}
	I1027 22:25:33.092484 1154508 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 22:25:33.242082 1154508 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-812436
	
	I1027 22:25:33.242104 1154508 ubuntu.go:182] provisioning hostname "functional-812436"
	I1027 22:25:33.242168 1154508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812436
	I1027 22:25:33.262790 1154508 main.go:143] libmachine: Using SSH client type: native
	I1027 22:25:33.263081 1154508 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34254 <nil> <nil>}
	I1027 22:25:33.263090 1154508 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-812436 && echo "functional-812436" | sudo tee /etc/hostname
	I1027 22:25:33.424700 1154508 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-812436
	
	I1027 22:25:33.424767 1154508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812436
	I1027 22:25:33.443894 1154508 main.go:143] libmachine: Using SSH client type: native
	I1027 22:25:33.444211 1154508 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34254 <nil> <nil>}
	I1027 22:25:33.444225 1154508 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-812436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-812436/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-812436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 22:25:33.594809 1154508 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 22:25:33.594825 1154508 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 22:25:33.594853 1154508 ubuntu.go:190] setting up certificates
	I1027 22:25:33.594863 1154508 provision.go:84] configureAuth start
	I1027 22:25:33.594957 1154508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-812436
	I1027 22:25:33.613495 1154508 provision.go:143] copyHostCerts
	I1027 22:25:33.613572 1154508 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 22:25:33.613589 1154508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 22:25:33.613665 1154508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 22:25:33.613763 1154508 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 22:25:33.613767 1154508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 22:25:33.613791 1154508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 22:25:33.613839 1154508 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 22:25:33.613843 1154508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 22:25:33.613870 1154508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 22:25:33.613913 1154508 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.functional-812436 san=[127.0.0.1 192.168.49.2 functional-812436 localhost minikube]
	I1027 22:25:33.737964 1154508 provision.go:177] copyRemoteCerts
	I1027 22:25:33.738015 1154508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 22:25:33.738053 1154508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812436
	I1027 22:25:33.760095 1154508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/functional-812436/id_rsa Username:docker}
	I1027 22:25:33.866207 1154508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 22:25:33.883670 1154508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 22:25:33.901297 1154508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 22:25:33.919446 1154508 provision.go:87] duration metric: took 324.560757ms to configureAuth
	I1027 22:25:33.919464 1154508 ubuntu.go:206] setting minikube options for container-runtime
	I1027 22:25:33.919664 1154508 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:25:33.919767 1154508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812436
	I1027 22:25:33.936808 1154508 main.go:143] libmachine: Using SSH client type: native
	I1027 22:25:33.937113 1154508 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34254 <nil> <nil>}
	I1027 22:25:33.937125 1154508 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 22:25:39.325408 1154508 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 22:25:39.325421 1154508 machine.go:97] duration metric: took 6.251529964s to provisionDockerMachine
	I1027 22:25:39.325430 1154508 start.go:293] postStartSetup for "functional-812436" (driver="docker")
	I1027 22:25:39.325440 1154508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 22:25:39.325512 1154508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 22:25:39.325559 1154508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812436
	I1027 22:25:39.344379 1154508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/functional-812436/id_rsa Username:docker}
	I1027 22:25:39.450624 1154508 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 22:25:39.454251 1154508 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 22:25:39.454270 1154508 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 22:25:39.454280 1154508 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 22:25:39.454338 1154508 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 22:25:39.454452 1154508 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 22:25:39.454528 1154508 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/test/nested/copy/1134735/hosts -> hosts in /etc/test/nested/copy/1134735
	I1027 22:25:39.454571 1154508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1134735
	I1027 22:25:39.462337 1154508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 22:25:39.481753 1154508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/test/nested/copy/1134735/hosts --> /etc/test/nested/copy/1134735/hosts (40 bytes)
	I1027 22:25:39.499900 1154508 start.go:296] duration metric: took 174.455801ms for postStartSetup
	I1027 22:25:39.499974 1154508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:25:39.500014 1154508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812436
	I1027 22:25:39.517174 1154508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/functional-812436/id_rsa Username:docker}
	I1027 22:25:39.619840 1154508 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 22:25:39.625098 1154508 fix.go:57] duration metric: took 6.572587225s for fixHost
	I1027 22:25:39.625114 1154508 start.go:83] releasing machines lock for "functional-812436", held for 6.572628916s
	I1027 22:25:39.625197 1154508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-812436
	I1027 22:25:39.642543 1154508 ssh_runner.go:195] Run: cat /version.json
	I1027 22:25:39.642612 1154508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 22:25:39.642648 1154508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812436
	I1027 22:25:39.642667 1154508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812436
	I1027 22:25:39.662470 1154508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/functional-812436/id_rsa Username:docker}
	I1027 22:25:39.665692 1154508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/functional-812436/id_rsa Username:docker}
	I1027 22:25:39.858563 1154508 ssh_runner.go:195] Run: systemctl --version
	I1027 22:25:39.865058 1154508 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 22:25:39.902508 1154508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 22:25:39.906842 1154508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 22:25:39.906899 1154508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 22:25:39.914693 1154508 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 22:25:39.914707 1154508 start.go:496] detecting cgroup driver to use...
	I1027 22:25:39.914738 1154508 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 22:25:39.914787 1154508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 22:25:39.930321 1154508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 22:25:39.943923 1154508 docker.go:218] disabling cri-docker service (if available) ...
	I1027 22:25:39.943995 1154508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 22:25:39.960172 1154508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 22:25:39.973653 1154508 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 22:25:40.115655 1154508 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 22:25:40.262054 1154508 docker.go:234] disabling docker service ...
	I1027 22:25:40.262111 1154508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 22:25:40.277810 1154508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 22:25:40.291048 1154508 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 22:25:40.434881 1154508 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 22:25:40.572129 1154508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 22:25:40.586178 1154508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 22:25:40.600056 1154508 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 22:25:40.600109 1154508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:25:40.609373 1154508 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 22:25:40.609432 1154508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:25:40.618648 1154508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:25:40.627714 1154508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:25:40.636459 1154508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 22:25:40.644863 1154508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:25:40.653499 1154508 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 22:25:40.661655 1154508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
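Note: the sed runs above rewrite CRI-O's drop-in config in place: the pause image, cgroup_manager=cgroupfs, conmon_cgroup=pod, and a default_sysctls entry opening unprivileged ports from 0. A quick way to confirm the drop-in ended up as intended (key names per crio.conf; the expected values are what these edits should produce):

    # expected: pause_image = "registry.k8s.io/pause:3.10.1",
    #           cgroup_manager = "cgroupfs", conmon_cgroup = "pod",
    #           default_sysctls containing "net.ipv4.ip_unprivileged_port_start=0"
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' \
        /etc/crio/crio.conf.d/02-crio.conf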
	I1027 22:25:40.670633 1154508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 22:25:40.677819 1154508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 22:25:40.684915 1154508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:25:40.815340 1154508 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 22:25:41.012732 1154508 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 22:25:41.012819 1154508 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 22:25:41.016602 1154508 start.go:564] Will wait 60s for crictl version
	I1027 22:25:41.016660 1154508 ssh_runner.go:195] Run: which crictl
	I1027 22:25:41.020238 1154508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 22:25:41.043868 1154508 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
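Note: after the crio restart, minikube waits up to 60s for the socket and another 60s for crictl version to answer. A rough shell equivalent of that readiness loop:

    # poll for the CRI-O socket, then confirm the CRI API responds
    for i in $(seq 1 60); do
        [ -S /var/run/crio/crio.sock ] && break
        sleep 1
    done
    sudo crictl version    # RuntimeName should come back as cri-o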
	I1027 22:25:41.043938 1154508 ssh_runner.go:195] Run: crio --version
	I1027 22:25:41.072417 1154508 ssh_runner.go:195] Run: crio --version
	I1027 22:25:41.102747 1154508 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 22:25:41.105690 1154508 cli_runner.go:164] Run: docker network inspect functional-812436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 22:25:41.121465 1154508 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1027 22:25:41.129257 1154508 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1027 22:25:41.132132 1154508 kubeadm.go:884] updating cluster {Name:functional-812436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-812436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 22:25:41.132262 1154508 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:25:41.132340 1154508 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:25:41.168274 1154508 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:25:41.168286 1154508 crio.go:433] Images already preloaded, skipping extraction
	I1027 22:25:41.168340 1154508 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 22:25:41.197019 1154508 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 22:25:41.197031 1154508 cache_images.go:86] Images are preloaded, skipping loading
	I1027 22:25:41.197038 1154508 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1027 22:25:41.197147 1154508 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-812436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-812436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
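Note: the ExecStart override above is installed as a systemd drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below), so the base kubelet.service stays untouched. To see the unit merged with its drop-ins:

    # prints kubelet.service followed by every drop-in that overrides it
    systemctl cat kubelet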
	I1027 22:25:41.197239 1154508 ssh_runner.go:195] Run: crio config
	I1027 22:25:41.271132 1154508 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1027 22:25:41.271161 1154508 cni.go:84] Creating CNI manager for ""
	I1027 22:25:41.271170 1154508 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:25:41.271193 1154508 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 22:25:41.271217 1154508 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-812436 NodeName:functional-812436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 22:25:41.271555 1154508 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-812436"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
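Note: the rendered kubeadm.yaml above is one multi-document file bundling InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A sketch of sanity-checking it before any init phase runs (not something this test does; assumes kubeadm >= v1.26, where the validate subcommand exists):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml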
	
	I1027 22:25:41.271637 1154508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 22:25:41.283441 1154508 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 22:25:41.283514 1154508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 22:25:41.291760 1154508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 22:25:41.305141 1154508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 22:25:41.317453 1154508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1027 22:25:41.330037 1154508 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1027 22:25:41.333829 1154508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:25:41.465691 1154508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:25:41.481702 1154508 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436 for IP: 192.168.49.2
	I1027 22:25:41.481714 1154508 certs.go:195] generating shared ca certs ...
	I1027 22:25:41.481730 1154508 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:25:41.481889 1154508 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 22:25:41.481925 1154508 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 22:25:41.481931 1154508 certs.go:257] generating profile certs ...
	I1027 22:25:41.482018 1154508 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.key
	I1027 22:25:41.482060 1154508 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/apiserver.key.bddfebe9
	I1027 22:25:41.482097 1154508 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/proxy-client.key
	I1027 22:25:41.482211 1154508 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 22:25:41.482256 1154508 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 22:25:41.482263 1154508 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 22:25:41.482294 1154508 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 22:25:41.482332 1154508 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 22:25:41.482359 1154508 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 22:25:41.482432 1154508 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 22:25:41.483030 1154508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 22:25:41.502880 1154508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 22:25:41.521985 1154508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 22:25:41.540627 1154508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 22:25:41.558118 1154508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 22:25:41.575669 1154508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 22:25:41.593231 1154508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 22:25:41.611730 1154508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 22:25:41.629844 1154508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 22:25:41.648494 1154508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 22:25:41.665370 1154508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 22:25:41.683994 1154508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 22:25:41.697037 1154508 ssh_runner.go:195] Run: openssl version
	I1027 22:25:41.703237 1154508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 22:25:41.711936 1154508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 22:25:41.716264 1154508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 22:25:41.716322 1154508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 22:25:41.757823 1154508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
	I1027 22:25:41.766027 1154508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 22:25:41.774558 1154508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 22:25:41.778431 1154508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 22:25:41.778495 1154508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 22:25:41.820599 1154508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 22:25:41.828753 1154508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 22:25:41.837669 1154508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:25:41.842308 1154508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:25:41.842365 1154508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 22:25:41.884446 1154508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
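Note: each "openssl x509 -hash -noout" run above prints the subject-name hash that OpenSSL's certificate-directory lookup expects as a symlink named <hash>.0, which is why minikubeCA.pem gets linked to b5213941.0. Rebuilding one link by hand:

    # derive the lookup hash, then create the <hash>.0 symlink OpenSSL wants
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"    # b5213941 for this CA, per the log line above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"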
	I1027 22:25:41.892492 1154508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 22:25:41.896546 1154508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 22:25:41.938915 1154508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 22:25:41.985465 1154508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 22:25:42.027408 1154508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 22:25:42.070157 1154508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 22:25:42.113844 1154508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
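Note: the six "-checkend 86400" runs assert that each control-plane certificate stays valid for at least 24 hours (86400 seconds); openssl exits non-zero and prints "Certificate will expire" otherwise. One cert as an example:

    # exit status 0 means the cert is good for at least another day
    sudo openssl x509 -noout -checkend 86400 \
        -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
        && echo "ok for 24h" || echo "expiring within 24h"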
	I1027 22:25:42.157195 1154508 kubeadm.go:401] StartCluster: {Name:functional-812436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-812436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:25:42.157337 1154508 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 22:25:42.157413 1154508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:25:42.189909 1154508 cri.go:89] found id: "ae9751844d45c98ba817dc4500919fea87baaa167407f685f4dec90f836d7451"
	I1027 22:25:42.189928 1154508 cri.go:89] found id: "2cd2de4e99f3d8f309110476d4f229e8876aa66ae33c61166acf8d8d963cd826"
	I1027 22:25:42.189932 1154508 cri.go:89] found id: "be0c2b98faacd26e74c9da5b8503216218639dd79f2dffd0ebeb0c754e7a5008"
	I1027 22:25:42.189934 1154508 cri.go:89] found id: "6c2ada7036c3ae3dc827bee6229abaa4b627d7c3321730ee4b69686ed7112341"
	I1027 22:25:42.189937 1154508 cri.go:89] found id: "25b07d3cda2e65a74f7139fc6c362557732b15bcf65ddb66dd21478b5282dccf"
	I1027 22:25:42.189940 1154508 cri.go:89] found id: "5a5f3fc6741245cb739806c5c5f1cde6bb446f2fd6b9efebb573c89d7ad2ba3c"
	I1027 22:25:42.189942 1154508 cri.go:89] found id: "e585f5277bdf4192dd4a62d024d85e653f382a4ce3cf90090b32b507155228cb"
	I1027 22:25:42.189945 1154508 cri.go:89] found id: "1f82ea9ab0da84639bb7f8e732f5c9a3ca84e4aa17e4957019f38f9cdc40e5ae"
	I1027 22:25:42.189947 1154508 cri.go:89] found id: "010b487b3ec3d86f4a1f746b5147784720d119a3937d923a897794c33ddc5216"
	I1027 22:25:42.189955 1154508 cri.go:89] found id: "3a3fa45e47b52ed69d9f387df295ae9cf748578c5cbbe9b25612bc8bf8e72be9"
	I1027 22:25:42.189975 1154508 cri.go:89] found id: "c2e5fc13925c4bed34fa3bb98275df70a5e7d91fc4e4a060ccd809820b19a739"
	I1027 22:25:42.189977 1154508 cri.go:89] found id: "c79874ce1032594099fdb0f353658cc317a7cd9e9d58c07ee2ef454d40dd9ce4"
	I1027 22:25:42.189979 1154508 cri.go:89] found id: "b98956887f2ac81add0b995c3dc7102f4de0d8147446c2e59793e3d3e0ded7ce"
	I1027 22:25:42.189983 1154508 cri.go:89] found id: "50cb33251686052a0bf356d899d70ecebf6c998dd3008914447ac7614356a98b"
	I1027 22:25:42.189985 1154508 cri.go:89] found id: "a9aee1a286325dbe0876e3e1b05badffee9fba2e681476fa06a1571f35913d02"
	I1027 22:25:42.189989 1154508 cri.go:89] found id: "73ee1ede381961da14708ac4755c68d39977c5a97413340505c9e07dfbba55bf"
	I1027 22:25:42.189991 1154508 cri.go:89] found id: ""
	I1027 22:25:42.190054 1154508 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 22:25:42.204276 1154508 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:25:42Z" level=error msg="open /run/runc: no such file or directory"
	I1027 22:25:42.204365 1154508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 22:25:42.214622 1154508 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 22:25:42.214633 1154508 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 22:25:42.214687 1154508 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 22:25:42.227443 1154508 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:25:42.228007 1154508 kubeconfig.go:125] found "functional-812436" server: "https://192.168.49.2:8441"
	I1027 22:25:42.229538 1154508 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 22:25:42.240610 1154508 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-27 22:23:55.721454914 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-27 22:25:41.326356991 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
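Note: the restart path decides whether to reconfigure by diffing the kubeadm.yaml already on the node against the freshly rendered .new file; here the only drift is the enable-admission-plugins value, and any non-empty diff triggers reconfiguration. The check boils down to:

    # non-zero exit from diff means drift, so reconfigure from the .new file
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        echo "kubeadm config drift detected"
    fi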
	I1027 22:25:42.240619 1154508 kubeadm.go:1161] stopping kube-system containers ...
	I1027 22:25:42.240631 1154508 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1027 22:25:42.240687 1154508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 22:25:42.303019 1154508 cri.go:89] found id: "ae9751844d45c98ba817dc4500919fea87baaa167407f685f4dec90f836d7451"
	I1027 22:25:42.303030 1154508 cri.go:89] found id: "2cd2de4e99f3d8f309110476d4f229e8876aa66ae33c61166acf8d8d963cd826"
	I1027 22:25:42.303034 1154508 cri.go:89] found id: "be0c2b98faacd26e74c9da5b8503216218639dd79f2dffd0ebeb0c754e7a5008"
	I1027 22:25:42.303036 1154508 cri.go:89] found id: "6c2ada7036c3ae3dc827bee6229abaa4b627d7c3321730ee4b69686ed7112341"
	I1027 22:25:42.303039 1154508 cri.go:89] found id: "25b07d3cda2e65a74f7139fc6c362557732b15bcf65ddb66dd21478b5282dccf"
	I1027 22:25:42.303042 1154508 cri.go:89] found id: "5a5f3fc6741245cb739806c5c5f1cde6bb446f2fd6b9efebb573c89d7ad2ba3c"
	I1027 22:25:42.303044 1154508 cri.go:89] found id: "e585f5277bdf4192dd4a62d024d85e653f382a4ce3cf90090b32b507155228cb"
	I1027 22:25:42.303047 1154508 cri.go:89] found id: "1f82ea9ab0da84639bb7f8e732f5c9a3ca84e4aa17e4957019f38f9cdc40e5ae"
	I1027 22:25:42.303049 1154508 cri.go:89] found id: "c79874ce1032594099fdb0f353658cc317a7cd9e9d58c07ee2ef454d40dd9ce4"
	I1027 22:25:42.303056 1154508 cri.go:89] found id: "b98956887f2ac81add0b995c3dc7102f4de0d8147446c2e59793e3d3e0ded7ce"
	I1027 22:25:42.303070 1154508 cri.go:89] found id: "50cb33251686052a0bf356d899d70ecebf6c998dd3008914447ac7614356a98b"
	I1027 22:25:42.303073 1154508 cri.go:89] found id: "a9aee1a286325dbe0876e3e1b05badffee9fba2e681476fa06a1571f35913d02"
	I1027 22:25:42.303075 1154508 cri.go:89] found id: "73ee1ede381961da14708ac4755c68d39977c5a97413340505c9e07dfbba55bf"
	I1027 22:25:42.303077 1154508 cri.go:89] found id: ""
	I1027 22:25:42.303088 1154508 cri.go:252] Stopping containers: [ae9751844d45c98ba817dc4500919fea87baaa167407f685f4dec90f836d7451 2cd2de4e99f3d8f309110476d4f229e8876aa66ae33c61166acf8d8d963cd826 be0c2b98faacd26e74c9da5b8503216218639dd79f2dffd0ebeb0c754e7a5008 6c2ada7036c3ae3dc827bee6229abaa4b627d7c3321730ee4b69686ed7112341 25b07d3cda2e65a74f7139fc6c362557732b15bcf65ddb66dd21478b5282dccf 5a5f3fc6741245cb739806c5c5f1cde6bb446f2fd6b9efebb573c89d7ad2ba3c e585f5277bdf4192dd4a62d024d85e653f382a4ce3cf90090b32b507155228cb 1f82ea9ab0da84639bb7f8e732f5c9a3ca84e4aa17e4957019f38f9cdc40e5ae c79874ce1032594099fdb0f353658cc317a7cd9e9d58c07ee2ef454d40dd9ce4 b98956887f2ac81add0b995c3dc7102f4de0d8147446c2e59793e3d3e0ded7ce 50cb33251686052a0bf356d899d70ecebf6c998dd3008914447ac7614356a98b a9aee1a286325dbe0876e3e1b05badffee9fba2e681476fa06a1571f35913d02 73ee1ede381961da14708ac4755c68d39977c5a97413340505c9e07dfbba55bf]
	I1027 22:25:42.303162 1154508 ssh_runner.go:195] Run: which crictl
	I1027 22:25:42.310767 1154508 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 ae9751844d45c98ba817dc4500919fea87baaa167407f685f4dec90f836d7451 2cd2de4e99f3d8f309110476d4f229e8876aa66ae33c61166acf8d8d963cd826 be0c2b98faacd26e74c9da5b8503216218639dd79f2dffd0ebeb0c754e7a5008 6c2ada7036c3ae3dc827bee6229abaa4b627d7c3321730ee4b69686ed7112341 25b07d3cda2e65a74f7139fc6c362557732b15bcf65ddb66dd21478b5282dccf 5a5f3fc6741245cb739806c5c5f1cde6bb446f2fd6b9efebb573c89d7ad2ba3c e585f5277bdf4192dd4a62d024d85e653f382a4ce3cf90090b32b507155228cb 1f82ea9ab0da84639bb7f8e732f5c9a3ca84e4aa17e4957019f38f9cdc40e5ae c79874ce1032594099fdb0f353658cc317a7cd9e9d58c07ee2ef454d40dd9ce4 b98956887f2ac81add0b995c3dc7102f4de0d8147446c2e59793e3d3e0ded7ce 50cb33251686052a0bf356d899d70ecebf6c998dd3008914447ac7614356a98b a9aee1a286325dbe0876e3e1b05badffee9fba2e681476fa06a1571f35913d02 73ee1ede381961da14708ac4755c68d39977c5a97413340505c9e07dfbba55bf
	I1027 22:25:42.390877 1154508 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1027 22:25:42.497257 1154508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 22:25:42.505071 1154508 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Oct 27 22:23 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct 27 22:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct 27 22:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct 27 22:24 /etc/kubernetes/scheduler.conf
	
	I1027 22:25:42.505126 1154508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1027 22:25:42.512899 1154508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1027 22:25:42.520772 1154508 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:25:42.520828 1154508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 22:25:42.528025 1154508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1027 22:25:42.535367 1154508 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:25:42.535429 1154508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 22:25:42.542888 1154508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1027 22:25:42.550463 1154508 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1027 22:25:42.550518 1154508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 22:25:42.557834 1154508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 22:25:42.565586 1154508 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:25:42.615012 1154508 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:25:44.581777 1154508 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.966740481s)
	I1027 22:25:44.581834 1154508 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:25:44.806136 1154508 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:25:44.869441 1154508 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
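Note: instead of a full "kubeadm init", the restart replays individual phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd, with the addon phase applied later), which preserves existing cluster state. The sequence above, condensed into one sketch:

    # mirrors the phase runs above; $phase is intentionally unquoted
    # so "certs all" splits into subcommand plus argument
    K=/var/lib/minikube/binaries/v1.34.1
    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
        sudo env PATH="$K:$PATH" kubeadm init phase $phase \
            --config /var/tmp/minikube/kubeadm.yaml
    done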
	I1027 22:25:44.923643 1154508 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:25:44.923709 1154508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:25:45.423923 1154508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:25:45.923777 1154508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:25:45.939851 1154508 api_server.go:72] duration metric: took 1.016217951s to wait for apiserver process to appear ...
	I1027 22:25:45.939867 1154508 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:25:45.939887 1154508 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1027 22:25:50.400851 1154508 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 22:25:50.400877 1154508 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 22:25:50.400894 1154508 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1027 22:25:50.414366 1154508 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 22:25:50.414449 1154508 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 22:25:50.440628 1154508 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1027 22:25:50.469441 1154508 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 22:25:50.469458 1154508 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 22:25:50.939980 1154508 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1027 22:25:50.956138 1154508 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:25:50.956159 1154508 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 22:25:51.440805 1154508 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1027 22:25:51.449669 1154508 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 22:25:51.449686 1154508 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
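Note: once anonymous access is in place the endpoint flips from 403 to 500, and the body itemizes every registered check: "[+]" entries pass, "[-]" entries still fail, and the aggregate stays 500 until the last failing poststarthook (rbac/bootstrap-roles here) completes. The same per-check breakdown is available even from a healthy server:

    # ?verbose lists each check even when the aggregate answer is just "ok"
    curl -sk "https://192.168.49.2:8441/healthz?verbose"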
	I1027 22:25:51.940031 1154508 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1027 22:25:51.948384 1154508 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1027 22:25:51.962330 1154508 api_server.go:141] control plane version: v1.34.1
	I1027 22:25:51.962350 1154508 api_server.go:131] duration metric: took 6.022477585s to wait for apiserver health ...
	I1027 22:25:51.962358 1154508 cni.go:84] Creating CNI manager for ""
	I1027 22:25:51.962363 1154508 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:25:51.965819 1154508 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 22:25:51.968703 1154508 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 22:25:51.972841 1154508 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 22:25:51.972851 1154508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 22:25:51.985357 1154508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 22:25:52.468004 1154508 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:25:52.471568 1154508 system_pods.go:59] 8 kube-system pods found
	I1027 22:25:52.471600 1154508 system_pods.go:61] "coredns-66bc5c9577-jd7sv" [4ea648ba-3487-4f7a-bcaa-2eadd19e24f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:25:52.471607 1154508 system_pods.go:61] "etcd-functional-812436" [979b2cd5-60a7-4b94-b490-a9d25af04c7b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:25:52.471612 1154508 system_pods.go:61] "kindnet-fs9vc" [d9c439ae-bc9f-43e3-a9a3-55695f932487] Running
	I1027 22:25:52.471618 1154508 system_pods.go:61] "kube-apiserver-functional-812436" [95e238c9-1493-43ed-8ded-c18c78886197] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:25:52.471626 1154508 system_pods.go:61] "kube-controller-manager-functional-812436" [7a59366e-a495-45ea-9b7b-23d568d3e6cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:25:52.471630 1154508 system_pods.go:61] "kube-proxy-dq5sk" [38d6bff6-d15a-43f3-be43-6b5ab9715919] Running
	I1027 22:25:52.471635 1154508 system_pods.go:61] "kube-scheduler-functional-812436" [aa80e143-b54e-44a7-b46c-3c0f8b581138] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:25:52.471638 1154508 system_pods.go:61] "storage-provisioner" [1ceb2728-25f7-4317-81bd-4d43cdbeecad] Running
	I1027 22:25:52.471643 1154508 system_pods.go:74] duration metric: took 3.614057ms to wait for pod list to return data ...
	I1027 22:25:52.471649 1154508 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:25:52.474824 1154508 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 22:25:52.474847 1154508 node_conditions.go:123] node cpu capacity is 2
	I1027 22:25:52.474858 1154508 node_conditions.go:105] duration metric: took 3.205029ms to run NodePressure ...
	I1027 22:25:52.474928 1154508 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 22:25:52.742500 1154508 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1027 22:25:52.745563 1154508 kubeadm.go:744] kubelet initialised
	I1027 22:25:52.745573 1154508 kubeadm.go:745] duration metric: took 3.060675ms waiting for restarted kubelet to initialise ...
	I1027 22:25:52.745586 1154508 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 22:25:52.754770 1154508 ops.go:34] apiserver oom_adj: -16
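Note: oom_adj is the legacy /proc interface (range -17..15); the -16 read above corresponds to an oom_score_adj of roughly -997 (modern range -1000..1000), which the kubelet assigns to system-node-critical static pods, so the apiserver is nearly exempt from the OOM killer. Both views side by side:

    pid=$(pgrep -n kube-apiserver)
    cat /proc/$pid/oom_adj          # legacy scale, -16 here
    cat /proc/$pid/oom_score_adj    # modern scale, about -997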
	I1027 22:25:52.754780 1154508 kubeadm.go:602] duration metric: took 10.540142599s to restartPrimaryControlPlane
	I1027 22:25:52.754787 1154508 kubeadm.go:403] duration metric: took 10.597604751s to StartCluster
	I1027 22:25:52.754802 1154508 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:25:52.754872 1154508 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 22:25:52.755568 1154508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 22:25:52.755865 1154508 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 22:25:52.756075 1154508 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:25:52.756126 1154508 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 22:25:52.756460 1154508 addons.go:69] Setting default-storageclass=true in profile "functional-812436"
	I1027 22:25:52.756461 1154508 addons.go:69] Setting storage-provisioner=true in profile "functional-812436"
	I1027 22:25:52.756476 1154508 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-812436"
	I1027 22:25:52.756479 1154508 addons.go:238] Setting addon storage-provisioner=true in "functional-812436"
	W1027 22:25:52.756485 1154508 addons.go:247] addon storage-provisioner should already be in state true
	I1027 22:25:52.756511 1154508 host.go:66] Checking if "functional-812436" exists ...
	I1027 22:25:52.756841 1154508 cli_runner.go:164] Run: docker container inspect functional-812436 --format={{.State.Status}}
	I1027 22:25:52.756956 1154508 cli_runner.go:164] Run: docker container inspect functional-812436 --format={{.State.Status}}
	I1027 22:25:52.759644 1154508 out.go:179] * Verifying Kubernetes components...
	I1027 22:25:52.762754 1154508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 22:25:52.794515 1154508 addons.go:238] Setting addon default-storageclass=true in "functional-812436"
	W1027 22:25:52.794526 1154508 addons.go:247] addon default-storageclass should already be in state true
	I1027 22:25:52.794549 1154508 host.go:66] Checking if "functional-812436" exists ...
	I1027 22:25:52.794995 1154508 cli_runner.go:164] Run: docker container inspect functional-812436 --format={{.State.Status}}
	I1027 22:25:52.795330 1154508 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 22:25:52.798216 1154508 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:25:52.798225 1154508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 22:25:52.798284 1154508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812436
	I1027 22:25:52.827093 1154508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/functional-812436/id_rsa Username:docker}
	I1027 22:25:52.827955 1154508 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 22:25:52.827973 1154508 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 22:25:52.828031 1154508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812436
	I1027 22:25:52.857082 1154508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/functional-812436/id_rsa Username:docker}
	I1027 22:25:52.966996 1154508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 22:25:53.023480 1154508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 22:25:53.026810 1154508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 22:25:53.751918 1154508 node_ready.go:35] waiting up to 6m0s for node "functional-812436" to be "Ready" ...
	I1027 22:25:53.756454 1154508 node_ready.go:49] node "functional-812436" is "Ready"
	I1027 22:25:53.756470 1154508 node_ready.go:38] duration metric: took 4.535123ms for node "functional-812436" to be "Ready" ...
	I1027 22:25:53.756482 1154508 api_server.go:52] waiting for apiserver process to appear ...
	I1027 22:25:53.756543 1154508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:25:53.775431 1154508 api_server.go:72] duration metric: took 1.019539264s to wait for apiserver process to appear ...
	I1027 22:25:53.775445 1154508 api_server.go:88] waiting for apiserver healthz status ...
	I1027 22:25:53.775463 1154508 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1027 22:25:53.794838 1154508 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 22:25:53.797783 1154508 addons.go:514] duration metric: took 1.041636978s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 22:25:53.819291 1154508 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1027 22:25:53.820916 1154508 api_server.go:141] control plane version: v1.34.1
	I1027 22:25:53.820931 1154508 api_server.go:131] duration metric: took 45.480435ms to wait for apiserver health ...
	I1027 22:25:53.820938 1154508 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 22:25:53.838186 1154508 system_pods.go:59] 8 kube-system pods found
	I1027 22:25:53.838216 1154508 system_pods.go:61] "coredns-66bc5c9577-jd7sv" [4ea648ba-3487-4f7a-bcaa-2eadd19e24f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:25:53.838224 1154508 system_pods.go:61] "etcd-functional-812436" [979b2cd5-60a7-4b94-b490-a9d25af04c7b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:25:53.838231 1154508 system_pods.go:61] "kindnet-fs9vc" [d9c439ae-bc9f-43e3-a9a3-55695f932487] Running
	I1027 22:25:53.838237 1154508 system_pods.go:61] "kube-apiserver-functional-812436" [95e238c9-1493-43ed-8ded-c18c78886197] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:25:53.838244 1154508 system_pods.go:61] "kube-controller-manager-functional-812436" [7a59366e-a495-45ea-9b7b-23d568d3e6cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:25:53.838248 1154508 system_pods.go:61] "kube-proxy-dq5sk" [38d6bff6-d15a-43f3-be43-6b5ab9715919] Running
	I1027 22:25:53.838254 1154508 system_pods.go:61] "kube-scheduler-functional-812436" [aa80e143-b54e-44a7-b46c-3c0f8b581138] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:25:53.838257 1154508 system_pods.go:61] "storage-provisioner" [1ceb2728-25f7-4317-81bd-4d43cdbeecad] Running
	I1027 22:25:53.838262 1154508 system_pods.go:74] duration metric: took 17.320177ms to wait for pod list to return data ...
	I1027 22:25:53.838270 1154508 default_sa.go:34] waiting for default service account to be created ...
	I1027 22:25:53.844802 1154508 default_sa.go:45] found service account: "default"
	I1027 22:25:53.844823 1154508 default_sa.go:55] duration metric: took 6.545599ms for default service account to be created ...
	I1027 22:25:53.844831 1154508 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 22:25:53.848828 1154508 system_pods.go:86] 8 kube-system pods found
	I1027 22:25:53.848846 1154508 system_pods.go:89] "coredns-66bc5c9577-jd7sv" [4ea648ba-3487-4f7a-bcaa-2eadd19e24f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 22:25:53.848854 1154508 system_pods.go:89] "etcd-functional-812436" [979b2cd5-60a7-4b94-b490-a9d25af04c7b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 22:25:53.848858 1154508 system_pods.go:89] "kindnet-fs9vc" [d9c439ae-bc9f-43e3-a9a3-55695f932487] Running
	I1027 22:25:53.848863 1154508 system_pods.go:89] "kube-apiserver-functional-812436" [95e238c9-1493-43ed-8ded-c18c78886197] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 22:25:53.848879 1154508 system_pods.go:89] "kube-controller-manager-functional-812436" [7a59366e-a495-45ea-9b7b-23d568d3e6cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 22:25:53.848883 1154508 system_pods.go:89] "kube-proxy-dq5sk" [38d6bff6-d15a-43f3-be43-6b5ab9715919] Running
	I1027 22:25:53.848888 1154508 system_pods.go:89] "kube-scheduler-functional-812436" [aa80e143-b54e-44a7-b46c-3c0f8b581138] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 22:25:53.848891 1154508 system_pods.go:89] "storage-provisioner" [1ceb2728-25f7-4317-81bd-4d43cdbeecad] Running
	I1027 22:25:53.848897 1154508 system_pods.go:126] duration metric: took 4.06106ms to wait for k8s-apps to be running ...
	I1027 22:25:53.848903 1154508 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 22:25:53.848970 1154508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:25:53.863272 1154508 system_svc.go:56] duration metric: took 14.357037ms WaitForService to wait for kubelet
	I1027 22:25:53.863290 1154508 kubeadm.go:587] duration metric: took 1.107403634s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 22:25:53.863321 1154508 node_conditions.go:102] verifying NodePressure condition ...
	I1027 22:25:53.865881 1154508 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 22:25:53.865898 1154508 node_conditions.go:123] node cpu capacity is 2
	I1027 22:25:53.865908 1154508 node_conditions.go:105] duration metric: took 2.582657ms to run NodePressure ...
	I1027 22:25:53.865919 1154508 start.go:242] waiting for startup goroutines ...
	I1027 22:25:53.865925 1154508 start.go:247] waiting for cluster config update ...
	I1027 22:25:53.865934 1154508 start.go:256] writing updated cluster config ...
	I1027 22:25:53.866290 1154508 ssh_runner.go:195] Run: rm -f paused
	I1027 22:25:53.869906 1154508 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:25:53.873999 1154508 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jd7sv" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 22:25:55.880211 1154508 pod_ready.go:104] pod "coredns-66bc5c9577-jd7sv" is not "Ready", error: <nil>
	I1027 22:25:57.381632 1154508 pod_ready.go:94] pod "coredns-66bc5c9577-jd7sv" is "Ready"
	I1027 22:25:57.381647 1154508 pod_ready.go:86] duration metric: took 3.507635051s for pod "coredns-66bc5c9577-jd7sv" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:25:57.384616 1154508 pod_ready.go:83] waiting for pod "etcd-functional-812436" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:25:59.390074 1154508 pod_ready.go:94] pod "etcd-functional-812436" is "Ready"
	I1027 22:25:59.390088 1154508 pod_ready.go:86] duration metric: took 2.005460055s for pod "etcd-functional-812436" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:25:59.392525 1154508 pod_ready.go:83] waiting for pod "kube-apiserver-functional-812436" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:25:59.397386 1154508 pod_ready.go:94] pod "kube-apiserver-functional-812436" is "Ready"
	I1027 22:25:59.397400 1154508 pod_ready.go:86] duration metric: took 4.863084ms for pod "kube-apiserver-functional-812436" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:25:59.404200 1154508 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-812436" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 22:26:01.409425 1154508 pod_ready.go:104] pod "kube-controller-manager-functional-812436" is not "Ready", error: <nil>
	W1027 22:26:03.409819 1154508 pod_ready.go:104] pod "kube-controller-manager-functional-812436" is not "Ready", error: <nil>
	I1027 22:26:04.912332 1154508 pod_ready.go:94] pod "kube-controller-manager-functional-812436" is "Ready"
	I1027 22:26:04.912347 1154508 pod_ready.go:86] duration metric: took 5.508133772s for pod "kube-controller-manager-functional-812436" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:26:04.914945 1154508 pod_ready.go:83] waiting for pod "kube-proxy-dq5sk" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:26:04.919484 1154508 pod_ready.go:94] pod "kube-proxy-dq5sk" is "Ready"
	I1027 22:26:04.919498 1154508 pod_ready.go:86] duration metric: took 4.539455ms for pod "kube-proxy-dq5sk" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:26:04.921775 1154508 pod_ready.go:83] waiting for pod "kube-scheduler-functional-812436" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:26:04.926667 1154508 pod_ready.go:94] pod "kube-scheduler-functional-812436" is "Ready"
	I1027 22:26:04.926681 1154508 pod_ready.go:86] duration metric: took 4.893608ms for pod "kube-scheduler-functional-812436" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 22:26:04.926692 1154508 pod_ready.go:40] duration metric: took 11.056756749s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 22:26:04.982342 1154508 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 22:26:04.985341 1154508 out.go:179] * Done! kubectl is now configured to use "functional-812436" cluster and "default" namespace by default
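
Note on the pod_ready loop above: minikube polls each kube-system pod matched by the listed labels until its PodReady condition is True (or the pod is gone), under the 4m0s cap logged at 22:25:53. A minimal client-go sketch of the same check — illustrative, not minikube's own code, and assuming the kubeconfig minikube just wrote at the default ~/.kube/config path:

// readycheck.go - hypothetical sketch of the readiness poll the
// pod_ready.go lines above describe: list kube-system pods by the same
// label selectors and report whether each has PodReady == True.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumes the kubeconfig written by "minikube start" above.
	cfg, err := clientcmd.BuildConfigFromFlags("",
		filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// The same label selectors the log shows minikube waiting on.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(
			context.TODO(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%s: ready=%v\n", p.Name, ready)
		}
	}
}

Run against the functional-812436 context, each pod prints ready=true once its containers pass readiness, matching the pod_ready.go:94 lines above.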
	
	
	==> CRI-O <==
	Oct 27 22:26:40 functional-812436 crio[3505]: time="2025-10-27T22:26:40.980322596Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-fxp5d Namespace:default ID:d175bbcdbe7612b9e94fa8804648c16d9c59cc78e68b5b0b962df257f8c1ff08 UID:56194524-a1a3-4553-9473-247177d28d76 NetNS:/var/run/netns/a3738e5d-0d9e-457f-ad26-3b95e468f537 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d780}] Aliases:map[]}"
	Oct 27 22:26:40 functional-812436 crio[3505]: time="2025-10-27T22:26:40.980481793Z" level=info msg="Checking pod default_hello-node-75c85bcc94-fxp5d for CNI network kindnet (type=ptp)"
	Oct 27 22:26:40 functional-812436 crio[3505]: time="2025-10-27T22:26:40.983146052Z" level=info msg="Ran pod sandbox d175bbcdbe7612b9e94fa8804648c16d9c59cc78e68b5b0b962df257f8c1ff08 with infra container: default/hello-node-75c85bcc94-fxp5d/POD" id=43703e17-05fc-4e85-98f5-72e8e33ac1df name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 22:26:40 functional-812436 crio[3505]: time="2025-10-27T22:26:40.988092708Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=902a5b1d-1a6d-47cb-bfd7-2fb15b0db4b2 name=/runtime.v1.ImageService/PullImage
	Oct 27 22:26:44 functional-812436 crio[3505]: time="2025-10-27T22:26:44.924737782Z" level=info msg="Stopping pod sandbox: d64171c71e5612574c0a11e38398c6c041195a15b70a425531305f9e849b80e1" id=18888e9e-3752-4ae2-bbd7-9a7b68a7b622 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 22:26:44 functional-812436 crio[3505]: time="2025-10-27T22:26:44.924805557Z" level=info msg="Stopped pod sandbox (already stopped): d64171c71e5612574c0a11e38398c6c041195a15b70a425531305f9e849b80e1" id=18888e9e-3752-4ae2-bbd7-9a7b68a7b622 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 22:26:44 functional-812436 crio[3505]: time="2025-10-27T22:26:44.92562396Z" level=info msg="Removing pod sandbox: d64171c71e5612574c0a11e38398c6c041195a15b70a425531305f9e849b80e1" id=0bdfd9fd-d2a2-4cc5-8f07-891a94950d77 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 22:26:44 functional-812436 crio[3505]: time="2025-10-27T22:26:44.92927276Z" level=info msg="Removed pod sandbox: d64171c71e5612574c0a11e38398c6c041195a15b70a425531305f9e849b80e1" id=0bdfd9fd-d2a2-4cc5-8f07-891a94950d77 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 22:26:44 functional-812436 crio[3505]: time="2025-10-27T22:26:44.92989647Z" level=info msg="Stopping pod sandbox: 5b3093d731c271872dcbf3b4856a276ec100255749a372e6db91a161b502577a" id=d2f77595-f706-4b41-b27e-162ad57c21e8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 22:26:44 functional-812436 crio[3505]: time="2025-10-27T22:26:44.92994264Z" level=info msg="Stopped pod sandbox (already stopped): 5b3093d731c271872dcbf3b4856a276ec100255749a372e6db91a161b502577a" id=d2f77595-f706-4b41-b27e-162ad57c21e8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 22:26:44 functional-812436 crio[3505]: time="2025-10-27T22:26:44.93036337Z" level=info msg="Removing pod sandbox: 5b3093d731c271872dcbf3b4856a276ec100255749a372e6db91a161b502577a" id=0a285492-bddd-4cf1-a162-b6f19545c14f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 22:26:44 functional-812436 crio[3505]: time="2025-10-27T22:26:44.934063346Z" level=info msg="Removed pod sandbox: 5b3093d731c271872dcbf3b4856a276ec100255749a372e6db91a161b502577a" id=0a285492-bddd-4cf1-a162-b6f19545c14f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 22:26:44 functional-812436 crio[3505]: time="2025-10-27T22:26:44.935107153Z" level=info msg="Stopping pod sandbox: e01e4888309171cbd573bbfc536f23038206b51b2b231f9500f8c52209d3cfb4" id=db2305f1-6b48-40b8-b8b7-72a3c035ce84 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 22:26:44 functional-812436 crio[3505]: time="2025-10-27T22:26:44.935326535Z" level=info msg="Stopped pod sandbox (already stopped): e01e4888309171cbd573bbfc536f23038206b51b2b231f9500f8c52209d3cfb4" id=db2305f1-6b48-40b8-b8b7-72a3c035ce84 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 22:26:44 functional-812436 crio[3505]: time="2025-10-27T22:26:44.935800377Z" level=info msg="Removing pod sandbox: e01e4888309171cbd573bbfc536f23038206b51b2b231f9500f8c52209d3cfb4" id=493f454d-6b3c-45d0-ad51-e6ac36cef774 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 22:26:44 functional-812436 crio[3505]: time="2025-10-27T22:26:44.939753475Z" level=info msg="Removed pod sandbox: e01e4888309171cbd573bbfc536f23038206b51b2b231f9500f8c52209d3cfb4" id=493f454d-6b3c-45d0-ad51-e6ac36cef774 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 22:26:54 functional-812436 crio[3505]: time="2025-10-27T22:26:54.945566312Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7d27cb3f-9898-41f8-ad8f-6adc04a45bf3 name=/runtime.v1.ImageService/PullImage
	Oct 27 22:27:07 functional-812436 crio[3505]: time="2025-10-27T22:27:07.944006093Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=551176bd-ecd5-4214-8427-7281b1ffac9f name=/runtime.v1.ImageService/PullImage
	Oct 27 22:27:20 functional-812436 crio[3505]: time="2025-10-27T22:27:20.945559676Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b7bd82ac-9009-4e88-99e4-7eb1672366fc name=/runtime.v1.ImageService/PullImage
	Oct 27 22:27:58 functional-812436 crio[3505]: time="2025-10-27T22:27:58.945087676Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8e9f9142-aa88-48e7-af87-9fe722b6f456 name=/runtime.v1.ImageService/PullImage
	Oct 27 22:28:04 functional-812436 crio[3505]: time="2025-10-27T22:28:04.945180704Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=23b1840e-2716-4634-976b-20f10f44f2dd name=/runtime.v1.ImageService/PullImage
	Oct 27 22:29:22 functional-812436 crio[3505]: time="2025-10-27T22:29:22.946239798Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f1eeda13-277c-4917-8ecf-645633bb2797 name=/runtime.v1.ImageService/PullImage
	Oct 27 22:29:26 functional-812436 crio[3505]: time="2025-10-27T22:29:26.944587541Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=95d364de-cc52-4cee-8bd9-caa4919a958d name=/runtime.v1.ImageService/PullImage
	Oct 27 22:32:05 functional-812436 crio[3505]: time="2025-10-27T22:32:05.944490595Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7d4356f0-a59c-4a7d-b74a-b2957196b705 name=/runtime.v1.ImageService/PullImage
	Oct 27 22:32:16 functional-812436 crio[3505]: time="2025-10-27T22:32:16.944201094Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cdeee7e1-c61d-462e-9a6e-377f55ad0471 name=/runtime.v1.ImageService/PullImage
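
The CRI-O entries above show the kubelet re-requesting kicbase/echo-server:latest again and again (22:26:40 through 22:32:16): each line is a fresh PullImage RPC issued as the kubelet retries for the hello-node pods, and no corresponding "Pulled image" success line appears in the window shown — consistent with those pods staying unready. A minimal sketch (illustrative, not part of the report's tooling, and assuming the stock /var/run/crio/crio.sock socket path inside the node) that asks CRI-O directly whether the image has landed:

// imagecheck.go - hypothetical probe of CRI-O's image service over its
// unix socket, asking whether kicbase/echo-server:latest is present.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.ImageStatus(context.TODO(), &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: "kicbase/echo-server:latest"},
	})
	if err != nil {
		panic(err)
	}
	if resp.GetImage() == nil {
		fmt.Println("image not present yet; pull still retrying")
	} else {
		fmt.Printf("present: %v\n", resp.GetImage().GetRepoTags())
	}
}

On the node itself, "sudo crictl images" answers the same question without any code.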
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	09fe838708655       docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f   9 minutes ago       Running             myfrontend                0                   8e683d3304020       sp-pod                                      default
	9a602896c3397       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0   10 minutes ago      Running             nginx                     0                   c08f3d3e87226       nginx-svc                                   default
	e36e96d3b1ee2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   18e24085583ff       coredns-66bc5c9577-jd7sv                    kube-system
	766a869e53d43       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   60fcce74a66c6       kube-proxy-dq5sk                            kube-system
	caf56290ab1d9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   d2efa0eb62765       storage-provisioner                         kube-system
	253a412d06481       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   c74704e5b0e6b       kindnet-fs9vc                               kube-system
	8bdcf4ae9c763       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   6d85b50cc9793       kube-apiserver-functional-812436            kube-system
	1fc8b9e27699e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   403463d7419d0       kube-scheduler-functional-812436            kube-system
	286b45ff84427       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   122f3c987755e       kube-controller-manager-functional-812436   kube-system
	330d96c6bbc90       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   ce11e27789577       etcd-functional-812436                      kube-system
	2cd2de4e99f3d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   18e24085583ff       coredns-66bc5c9577-jd7sv                    kube-system
	be0c2b98faacd       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   ce11e27789577       etcd-functional-812436                      kube-system
	6c2ada7036c3a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   122f3c987755e       kube-controller-manager-functional-812436   kube-system
	25b07d3cda2e6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   d2efa0eb62765       storage-provisioner                         kube-system
	5a5f3fc674124       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   c74704e5b0e6b       kindnet-fs9vc                               kube-system
	e585f5277bdf4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   60fcce74a66c6       kube-proxy-dq5sk                            kube-system
	1f82ea9ab0da8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   403463d7419d0       kube-scheduler-functional-812436            kube-system
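
In the table above, ATTEMPT counts container restarts within the same pod sandbox: the Exited rows with ATTEMPT 1 are the instances stopped when the functional test restarted the cluster around 22:25, and the Running ATTEMPT 2 rows sharing the same POD ID are their replacements; kube-apiserver shows ATTEMPT 0, evidently because its sandbox was recreated. The same view comes from running "sudo crictl ps -a" inside the node.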
	
	
	==> coredns [2cd2de4e99f3d8f309110476d4f229e8876aa66ae33c61166acf8d8d963cd826] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58060 - 22984 "HINFO IN 5104727503791454796.6653798171079065634. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033798118s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
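
The ATTEMPT-1 coredns log above is the shutdown half of the restart: 10.96.0.1:443 is the cluster's kubernetes Service VIP (the apiserver log below confirms the 10.96.0.0/12 service CIDR), so the connection-refused errors simply record that kube-apiserver was down while the node came back up, after which this instance received SIGTERM and the ATTEMPT-2 instance below started cleanly. A minimal in-cluster probe of that same VIP, sketched with client-go (illustrative; runs from any pod in the cluster, where in-cluster config resolves to the VIP):

// healthprobe.go - hypothetical in-pod sketch: hit the same kubernetes
// Service VIP coredns uses and report whether /healthz answers.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // resolves to the 10.96.0.1:443 VIP
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	body, err := cs.Discovery().RESTClient().Get().
		AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		// This is the state the connection-refused lines above capture.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	fmt.Println("healthz:", string(body)) // "ok" once the apiserver is back
}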
	
	
	==> coredns [e36e96d3b1ee29b504d269bb0637ba6759b49903e8f66275f5032769ca361b10] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53980 - 20979 "HINFO IN 4765102480075105744.5528972917298248679. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032837576s
	
	
	==> describe nodes <==
	Name:               functional-812436
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-812436
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=functional-812436
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T22_24_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 22:24:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-812436
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 22:36:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 22:35:01 +0000   Mon, 27 Oct 2025 22:24:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 22:35:01 +0000   Mon, 27 Oct 2025 22:24:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 22:35:01 +0000   Mon, 27 Oct 2025 22:24:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 22:35:01 +0000   Mon, 27 Oct 2025 22:24:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-812436
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                22e6eeb8-d427-4843-b384-7ebfba682545
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-fxp5d                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  default                     hello-node-connect-7d85dfc575-m6t8s          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 coredns-66bc5c9577-jd7sv                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-812436                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-fs9vc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-812436             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-812436    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-dq5sk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-812436             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-812436 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-812436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-812436 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-812436 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-812436 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-812436 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-812436 event: Registered Node functional-812436 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-812436 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-812436 event: Registered Node functional-812436 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-812436 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-812436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-812436 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-812436 event: Registered Node functional-812436 in Controller
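
For reference, the Allocated resources percentages are requests (or limits) over allocatable: 850m/2000m of CPU is 42% and 220Mi (225280Ki) of 8022304Ki memory is about 2.8%, which kubectl truncates to 2%. The duplicated Starting/NodeHasSufficient* event groups reflect the three kubelet starts (the initial boot plus the two restarts the functional tests exercise), not a fault.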
	
	
	==> dmesg <==
	[Oct27 20:57] overlayfs: idmapped layers are currently not supported
	[Oct27 20:58] overlayfs: idmapped layers are currently not supported
	[ +22.437501] overlayfs: idmapped layers are currently not supported
	[Oct27 20:59] overlayfs: idmapped layers are currently not supported
	[Oct27 21:00] overlayfs: idmapped layers are currently not supported
	[Oct27 21:01] overlayfs: idmapped layers are currently not supported
	[Oct27 21:02] overlayfs: idmapped layers are currently not supported
	[Oct27 21:03] overlayfs: idmapped layers are currently not supported
	[ +50.457876] overlayfs: idmapped layers are currently not supported
	[Oct27 21:04] overlayfs: idmapped layers are currently not supported
	[Oct27 21:05] overlayfs: idmapped layers are currently not supported
	[ +28.375154] overlayfs: idmapped layers are currently not supported
	[Oct27 21:06] overlayfs: idmapped layers are currently not supported
	[ +27.785336] overlayfs: idmapped layers are currently not supported
	[Oct27 21:07] overlayfs: idmapped layers are currently not supported
	[Oct27 21:08] overlayfs: idmapped layers are currently not supported
	[Oct27 21:09] overlayfs: idmapped layers are currently not supported
	[Oct27 21:10] overlayfs: idmapped layers are currently not supported
	[Oct27 21:11] overlayfs: idmapped layers are currently not supported
	[Oct27 21:12] overlayfs: idmapped layers are currently not supported
	[Oct27 21:14] kauditd_printk_skb: 8 callbacks suppressed
	[Oct27 22:15] kauditd_printk_skb: 8 callbacks suppressed
	[Oct27 22:17] overlayfs: idmapped layers are currently not supported
	[Oct27 22:23] overlayfs: idmapped layers are currently not supported
	[Oct27 22:24] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [330d96c6bbc9004e7a05ea47fd33082bd3e345cbdde86416605c45d8dc02decb] <==
	{"level":"warn","ts":"2025-10-27T22:25:48.921832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:48.953976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:48.980447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.009588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.078020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.105355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.126462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.170986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.198926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.218753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.244080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.283846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.316626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.337869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.371645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.389716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.420938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.453097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.488307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.510630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.547528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:49.631922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35362","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T22:35:47.694625Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1123}
	{"level":"info","ts":"2025-10-27T22:35:47.717744Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1123,"took":"22.749805ms","hash":2154760754,"current-db-size-bytes":3330048,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1449984,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-27T22:35:47.717798Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2154760754,"revision":1123,"compact-revision":-1}
	
	
	==> etcd [be0c2b98faacd26e74c9da5b8503216218639dd79f2dffd0ebeb0c754e7a5008] <==
	{"level":"warn","ts":"2025-10-27T22:25:14.937385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:14.973456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:14.996592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:15.038151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:15.067744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:15.098796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T22:25:15.223908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52704","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T22:25:34.111165Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-27T22:25:34.111233Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-812436","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-27T22:25:34.111344Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T22:25:34.249592Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T22:25:34.249684Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T22:25:34.249706Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-27T22:25:34.249777Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-27T22:25:34.249842Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-27T22:25:34.249823Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-27T22:25:34.249861Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T22:25:34.249869Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-27T22:25:34.249903Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T22:25:34.249911Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T22:25:34.249918Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T22:25:34.253762Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-27T22:25:34.253852Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T22:25:34.253898Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-27T22:25:34.253925Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-812436","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 22:36:26 up  5:18,  0 user,  load average: 0.23, 0.46, 1.70
	Linux functional-812436 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [253a412d06481aaa297a84d614d85ae02db4a4620f4dbc14cc2b5e198ddb226f] <==
	I1027 22:34:21.625785       1 main.go:301] handling current node
	I1027 22:34:31.622740       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:34:31.622854       1 main.go:301] handling current node
	I1027 22:34:41.617328       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:34:41.617446       1 main.go:301] handling current node
	I1027 22:34:51.620832       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:34:51.620937       1 main.go:301] handling current node
	I1027 22:35:01.617640       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:35:01.617774       1 main.go:301] handling current node
	I1027 22:35:11.617624       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:35:11.617659       1 main.go:301] handling current node
	I1027 22:35:21.623706       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:35:21.623743       1 main.go:301] handling current node
	I1027 22:35:31.622493       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:35:31.622533       1 main.go:301] handling current node
	I1027 22:35:41.617200       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:35:41.617238       1 main.go:301] handling current node
	I1027 22:35:51.618885       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:35:51.618922       1 main.go:301] handling current node
	I1027 22:36:01.618881       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:36:01.619036       1 main.go:301] handling current node
	I1027 22:36:11.616983       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:36:11.617017       1 main.go:301] handling current node
	I1027 22:36:21.622258       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:36:21.622372       1 main.go:301] handling current node
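
The steady 10-second cadence of "Handling node with IPs ... handling current node" above is kindnet's normal reconcile loop; on this single-node cluster there is only the local node to process, so an unbroken run of these pairs indicates the CNI daemon stayed healthy through the capture window.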
	
	
	==> kindnet [5a5f3fc6741245cb739806c5c5f1cde6bb446f2fd6b9efebb573c89d7ad2ba3c] <==
	I1027 22:25:10.231273       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 22:25:10.237512       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1027 22:25:10.237648       1 main.go:148] setting mtu 1500 for CNI 
	I1027 22:25:10.237660       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 22:25:10.237672       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T22:25:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 22:25:10.557356       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 22:25:10.558439       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 22:25:10.558560       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 22:25:10.559378       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 22:25:10.560477       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 22:25:10.561331       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 22:25:10.561540       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 22:25:10.561660       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1027 22:25:16.864383       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 22:25:16.864411       1 metrics.go:72] Registering metrics
	I1027 22:25:16.864460       1 controller.go:711] "Syncing nftables rules"
	I1027 22:25:20.556934       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:25:20.557031       1 main.go:301] handling current node
	I1027 22:25:30.558481       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 22:25:30.558542       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8bdcf4ae9c763ebb47f0fe4894722781d450606a50c00ff44410e95acd593fd3] <==
	I1027 22:25:50.623748       1 aggregator.go:171] initial CRD sync complete...
	I1027 22:25:50.623765       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 22:25:50.623771       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 22:25:50.623777       1 cache.go:39] Caches are synced for autoregister controller
	I1027 22:25:50.623916       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 22:25:50.630941       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1027 22:25:50.639835       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 22:25:50.668055       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 22:25:50.684238       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 22:25:50.684343       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 22:25:50.969112       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 22:25:51.310660       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 22:25:52.461060       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 22:25:52.605820       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 22:25:52.675447       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 22:25:52.688617       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 22:25:53.940046       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 22:25:54.137653       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 22:25:54.237428       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 22:26:08.298954       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.237.84"}
	I1027 22:26:14.465389       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.105.15.240"}
	I1027 22:26:24.173532       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.9.64"}
	E1027 22:26:33.106610       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:41060: use of closed network connection
	I1027 22:26:40.730786       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.227.32"}
	I1027 22:35:50.556928       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [286b45ff84427e9d03ab93fd808fcb83148976be02151c40870ada2a538e2a82] <==
	I1027 22:25:53.886712       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 22:25:53.886812       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 22:25:53.886869       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 22:25:53.886917       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 22:25:53.886945       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 22:25:53.886932       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 22:25:53.886548       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 22:25:53.886953       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 22:25:53.886968       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 22:25:53.893411       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 22:25:53.893535       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 22:25:53.899819       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:25:53.909397       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 22:25:53.911142       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 22:25:53.914747       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 22:25:53.917316       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 22:25:53.919625       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 22:25:53.931239       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 22:25:53.931248       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 22:25:53.931265       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 22:25:53.932116       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 22:25:53.932245       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-812436"
	I1027 22:25:53.932321       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 22:25:53.931925       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 22:25:53.935890       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-controller-manager [6c2ada7036c3ae3dc827bee6229abaa4b627d7c3321730ee4b69686ed7112341] <==
	I1027 22:25:20.026873       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 22:25:20.026973       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 22:25:20.027021       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 22:25:20.027052       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 22:25:20.027390       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 22:25:20.029386       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 22:25:20.031602       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 22:25:20.032918       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 22:25:20.035603       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 22:25:20.042903       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 22:25:20.049232       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 22:25:20.050463       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 22:25:20.052937       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 22:25:20.059478       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:25:20.059558       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 22:25:20.059568       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 22:25:20.060093       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 22:25:20.064542       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 22:25:20.066781       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 22:25:20.068305       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 22:25:20.068702       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 22:25:20.068971       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 22:25:20.069116       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-812436"
	I1027 22:25:20.069331       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 22:25:20.077698       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-proxy [766a869e53d43a0586267473a6bd8864fc30a1384e040eb5605b2bdc8d5bbfbb] <==
	I1027 22:25:51.431336       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:25:51.561192       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:25:51.664599       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:25:51.664710       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1027 22:25:51.664813       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:25:51.736987       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:25:51.737112       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:25:51.740999       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:25:51.741686       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:25:51.741931       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:25:51.744235       1 config.go:200] "Starting service config controller"
	I1027 22:25:51.744320       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:25:51.744367       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:25:51.744409       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:25:51.744461       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:25:51.744501       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:25:51.745207       1 config.go:309] "Starting node config controller"
	I1027 22:25:51.745279       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:25:51.745311       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:25:51.844726       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 22:25:51.844732       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:25:51.844764       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e585f5277bdf4192dd4a62d024d85e653f382a4ce3cf90090b32b507155228cb] <==
	I1027 22:25:10.286349       1 server_linux.go:53] "Using iptables proxy"
	I1027 22:25:14.339562       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 22:25:16.823947       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 22:25:16.823991       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1027 22:25:16.824058       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 22:25:17.046059       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 22:25:17.046134       1 server_linux.go:132] "Using iptables Proxier"
	I1027 22:25:17.069007       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 22:25:17.071707       1 server.go:527] "Version info" version="v1.34.1"
	I1027 22:25:17.071728       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:25:17.076109       1 config.go:200] "Starting service config controller"
	I1027 22:25:17.076141       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 22:25:17.111649       1 config.go:106] "Starting endpoint slice config controller"
	I1027 22:25:17.111882       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 22:25:17.111957       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 22:25:17.111963       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 22:25:17.112438       1 config.go:309] "Starting node config controller"
	I1027 22:25:17.112445       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 22:25:17.112451       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 22:25:17.176212       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 22:25:17.213948       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 22:25:17.213995       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1f82ea9ab0da84639bb7f8e732f5c9a3ca84e4aa17e4957019f38f9cdc40e5ae] <==
	I1027 22:25:12.637637       1 serving.go:386] Generated self-signed cert in-memory
	I1027 22:25:16.867755       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 22:25:16.867784       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:25:16.894087       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 22:25:16.894220       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 22:25:16.894237       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 22:25:16.894267       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 22:25:16.905475       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:25:16.905511       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:25:16.905532       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 22:25:16.905538       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 22:25:16.995105       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 22:25:17.006886       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 22:25:17.006974       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:25:34.110631       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1027 22:25:34.110693       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:25:34.110717       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 22:25:34.110734       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1027 22:25:34.111084       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1027 22:25:34.111102       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1027 22:25:34.111113       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1027 22:25:34.111160       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [1fc8b9e27699e10015d173586e7449444c1e5a49625dfe122a23fb21eda3458a] <==
	I1027 22:25:49.555552       1 serving.go:386] Generated self-signed cert in-memory
	W1027 22:25:50.484252       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 22:25:50.484363       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 22:25:50.484398       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 22:25:50.484443       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 22:25:50.533896       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 22:25:50.533938       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 22:25:50.543817       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 22:25:50.548379       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:25:50.548500       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 22:25:50.548591       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 22:25:50.650514       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 22:33:48 functional-812436 kubelet[3829]: E1027 22:33:48.944518    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fxp5d" podUID="56194524-a1a3-4553-9473-247177d28d76"
	Oct 27 22:33:50 functional-812436 kubelet[3829]: E1027 22:33:50.944370    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-m6t8s" podUID="17fe6bf7-6244-4ced-aa3d-fc843f5d69f0"
	Oct 27 22:34:03 functional-812436 kubelet[3829]: E1027 22:34:03.943862    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fxp5d" podUID="56194524-a1a3-4553-9473-247177d28d76"
	Oct 27 22:34:04 functional-812436 kubelet[3829]: E1027 22:34:04.944563    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-m6t8s" podUID="17fe6bf7-6244-4ced-aa3d-fc843f5d69f0"
	Oct 27 22:34:18 functional-812436 kubelet[3829]: E1027 22:34:18.943718    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-m6t8s" podUID="17fe6bf7-6244-4ced-aa3d-fc843f5d69f0"
	Oct 27 22:34:18 functional-812436 kubelet[3829]: E1027 22:34:18.944795    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fxp5d" podUID="56194524-a1a3-4553-9473-247177d28d76"
	Oct 27 22:34:29 functional-812436 kubelet[3829]: E1027 22:34:29.943482    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-m6t8s" podUID="17fe6bf7-6244-4ced-aa3d-fc843f5d69f0"
	Oct 27 22:34:32 functional-812436 kubelet[3829]: E1027 22:34:32.943617    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fxp5d" podUID="56194524-a1a3-4553-9473-247177d28d76"
	Oct 27 22:34:41 functional-812436 kubelet[3829]: E1027 22:34:41.943266    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-m6t8s" podUID="17fe6bf7-6244-4ced-aa3d-fc843f5d69f0"
	Oct 27 22:34:44 functional-812436 kubelet[3829]: E1027 22:34:44.944388    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fxp5d" podUID="56194524-a1a3-4553-9473-247177d28d76"
	Oct 27 22:34:54 functional-812436 kubelet[3829]: E1027 22:34:54.944757    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-m6t8s" podUID="17fe6bf7-6244-4ced-aa3d-fc843f5d69f0"
	Oct 27 22:34:59 functional-812436 kubelet[3829]: E1027 22:34:59.943724    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fxp5d" podUID="56194524-a1a3-4553-9473-247177d28d76"
	Oct 27 22:35:08 functional-812436 kubelet[3829]: E1027 22:35:08.944252    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-m6t8s" podUID="17fe6bf7-6244-4ced-aa3d-fc843f5d69f0"
	Oct 27 22:35:10 functional-812436 kubelet[3829]: E1027 22:35:10.943944    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fxp5d" podUID="56194524-a1a3-4553-9473-247177d28d76"
	Oct 27 22:35:21 functional-812436 kubelet[3829]: E1027 22:35:21.944051    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-m6t8s" podUID="17fe6bf7-6244-4ced-aa3d-fc843f5d69f0"
	Oct 27 22:35:22 functional-812436 kubelet[3829]: E1027 22:35:22.943622    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fxp5d" podUID="56194524-a1a3-4553-9473-247177d28d76"
	Oct 27 22:35:33 functional-812436 kubelet[3829]: E1027 22:35:33.944162    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-m6t8s" podUID="17fe6bf7-6244-4ced-aa3d-fc843f5d69f0"
	Oct 27 22:35:37 functional-812436 kubelet[3829]: E1027 22:35:37.943662    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fxp5d" podUID="56194524-a1a3-4553-9473-247177d28d76"
	Oct 27 22:35:48 functional-812436 kubelet[3829]: E1027 22:35:48.944785    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-m6t8s" podUID="17fe6bf7-6244-4ced-aa3d-fc843f5d69f0"
	Oct 27 22:35:52 functional-812436 kubelet[3829]: E1027 22:35:52.945734    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fxp5d" podUID="56194524-a1a3-4553-9473-247177d28d76"
	Oct 27 22:36:02 functional-812436 kubelet[3829]: E1027 22:36:02.943877    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-m6t8s" podUID="17fe6bf7-6244-4ced-aa3d-fc843f5d69f0"
	Oct 27 22:36:04 functional-812436 kubelet[3829]: E1027 22:36:04.943686    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fxp5d" podUID="56194524-a1a3-4553-9473-247177d28d76"
	Oct 27 22:36:14 functional-812436 kubelet[3829]: E1027 22:36:14.944131    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-m6t8s" podUID="17fe6bf7-6244-4ced-aa3d-fc843f5d69f0"
	Oct 27 22:36:18 functional-812436 kubelet[3829]: E1027 22:36:18.944154    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-fxp5d" podUID="56194524-a1a3-4553-9473-247177d28d76"
	Oct 27 22:36:25 functional-812436 kubelet[3829]: E1027 22:36:25.944110    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-m6t8s" podUID="17fe6bf7-6244-4ced-aa3d-fc843f5d69f0"
	
	
	==> storage-provisioner [25b07d3cda2e65a74f7139fc6c362557732b15bcf65ddb66dd21478b5282dccf] <==
	I1027 22:25:10.959639       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 22:25:16.689704       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 22:25:16.689842       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 22:25:16.755552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:25:20.234109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:25:24.494419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:25:28.093266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:25:31.146545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [caf56290ab1d990b0803edeac42c9e1e9069812a839e874a1f9b9e4abd8dfc87] <==
	W1027 22:36:01.561618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:03.564451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:03.569024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:05.572381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:05.576810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:07.579627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:07.585973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:09.588835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:09.593119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:11.596317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:11.600863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:13.604863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:13.609700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:15.612923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:15.617428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:17.620178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:17.624357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:19.628018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:19.632912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:21.635396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:21.642244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:23.645245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:23.649980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:25.654168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 22:36:25.659015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
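The repeated client-go warnings at the tail of the log above come from the storage-provisioner's Endpoints-based leader election (the kube-system/k8s.io-minikube-hostpath lease acquired at startup); they are deprecation noise rather than a failure. A quick way to compare the legacy object with its EndpointSlice replacement, as a debugging sketch against this profile:

	kubectl --context functional-812436 -n kube-system get endpoints k8s.io-minikube-hostpath
	kubectl --context functional-812436 -n kube-system get endpointslices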
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-812436 -n functional-812436
helpers_test.go:269: (dbg) Run:  kubectl --context functional-812436 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-fxp5d hello-node-connect-7d85dfc575-m6t8s
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-812436 describe pod hello-node-75c85bcc94-fxp5d hello-node-connect-7d85dfc575-m6t8s
helpers_test.go:290: (dbg) kubectl --context functional-812436 describe pod hello-node-75c85bcc94-fxp5d hello-node-connect-7d85dfc575-m6t8s:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-fxp5d
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-812436/192.168.49.2
	Start Time:       Mon, 27 Oct 2025 22:26:40 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d4qdj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-d4qdj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m47s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-fxp5d to functional-812436
	  Normal   Pulling    7m1s (x5 over 9m47s)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m1s (x5 over 9m47s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m1s (x5 over 9m47s)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m36s (x20 over 9m46s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m24s (x21 over 9m46s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-m6t8s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-812436/192.168.49.2
	Start Time:       Mon, 27 Oct 2025 22:26:24 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dntfq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dntfq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-m6t8s to functional-812436
	  Normal   Pulling    7m5s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m5s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m5s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    2s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2s (x43 over 10m)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.60s)
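Root-cause note: every image pull in this test fails the same way ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list"). CRI-O's short-name policy refuses an unqualified reference that matches more than one configured search registry. A minimal workaround sketch, assuming the image is published on docker.io (an assumption; the test output never names the registry):

	# Fully qualify the reference so CRI-O performs no short-name resolution.
	# "hello-node-fq" is a hypothetical name chosen to avoid the test's deployment.
	kubectl --context functional-812436 create deployment hello-node-fq \
	  --image=docker.io/kicbase/echo-server:latest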

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-812436 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-812436 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-fxp5d" [56194524-a1a3-4553-9473-247177d28d76] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1027 22:27:01.963474 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:29:18.104693 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:29:45.805638 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:34:18.104637 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-812436 -n functional-812436
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-27 22:36:41.139221815 +0000 UTC m=+1231.771426238
functional_test.go:1460: (dbg) Run:  kubectl --context functional-812436 describe po hello-node-75c85bcc94-fxp5d -n default
functional_test.go:1460: (dbg) kubectl --context functional-812436 describe po hello-node-75c85bcc94-fxp5d -n default:
Name:             hello-node-75c85bcc94-fxp5d
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-812436/192.168.49.2
Start Time:       Mon, 27 Oct 2025 22:26:40 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d4qdj (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-d4qdj:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-fxp5d to functional-812436
Normal   Pulling    7m15s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m15s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m15s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m50s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m38s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-812436 logs hello-node-75c85bcc94-fxp5d -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-812436 logs hello-node-75c85bcc94-fxp5d -n default: exit status 1 (94.208526ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-fxp5d" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-812436 logs hello-node-75c85bcc94-fxp5d -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.82s)
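An alternative to qualifying every reference is to resolve the short name on the node itself with a containers-registries.conf(5) alias. A sketch, again assuming docker.io hosts the image and that this kicbase build reads drop-ins from /etc/containers/registries.conf.d/ (standard for CRI-O, but unverified here):

	# Inside the node, e.g. via `minikube -p functional-812436 ssh`:
	cat <<'EOF' | sudo tee /etc/containers/registries.conf.d/99-echo-server.conf
	[aliases]
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"
	EOF
	sudo systemctl restart crio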

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812436 service --namespace=default --https --url hello-node: exit status 115 (559.06831ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32555
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-812436 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)
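This exit 115 is downstream of the DeployApp failure: the NodePort URL is even printed on stdout, but minikube refuses to return it because no running pod backs the service. The precondition can be checked directly:

	kubectl --context functional-812436 get pods -l app=hello-node
	kubectl --context functional-812436 get endpointslices -l kubernetes.io/service-name=hello-node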

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812436 service hello-node --url --format={{.IP}}: exit status 115 (545.472725ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-812436 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812436 service hello-node --url: exit status 115 (490.1679ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32555
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-812436 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32555
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.49s)
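Note that the printed URL (http://192.168.49.2:32555) is well formed; only the backing-pod check fails. Once the image pull is fixed and a pod is Ready, the same endpoint should answer directly, e.g.:

	URL=$(out/minikube-linux-arm64 -p functional-812436 service hello-node --url)
	curl -s "$URL"   # echo-server is expected to echo the request back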

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 image load --daemon kicbase/echo-server:functional-812436 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-812436 image load --daemon kicbase/echo-server:functional-812436 --alsologtostderr: (1.550994012s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-812436" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.82s)
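The load command itself reports success; the assertion fails because `image ls` does not show the tag under the expected name. Listing what the runtime store actually holds narrows this down, as a sketch (the --format flag is assumed from current minikube releases):

	out/minikube-linux-arm64 -p functional-812436 image ls --format table \
	  | grep -i echo-server || echo "tag not present in the runtime store"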

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 image load --daemon kicbase/echo-server:functional-812436 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-812436" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-812436
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 image load --daemon kicbase/echo-server:functional-812436 --alsologtostderr
2025/10/27 22:36:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-812436" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 image save kicbase/echo-server:functional-812436 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)
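The save exits cleanly yet writes nothing, which is why ImageLoadFromFile below fails with a stat error on the same path. Verifying the artifact immediately after saving catches this earlier (the /tmp path below is a placeholder):

	out/minikube-linux-arm64 -p functional-812436 image save \
	  kicbase/echo-server:functional-812436 /tmp/echo-server-save.tar --alsologtostderr
	test -s /tmp/echo-server-save.tar && echo "tar written" || echo "save produced no file"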

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1027 22:36:55.362530 1162716 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:36:55.363780 1162716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:36:55.363843 1162716 out.go:374] Setting ErrFile to fd 2...
	I1027 22:36:55.363865 1162716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:36:55.364319 1162716 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:36:55.365559 1162716 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:36:55.365756 1162716 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:36:55.366561 1162716 cli_runner.go:164] Run: docker container inspect functional-812436 --format={{.State.Status}}
	I1027 22:36:55.391566 1162716 ssh_runner.go:195] Run: systemctl --version
	I1027 22:36:55.391621 1162716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812436
	I1027 22:36:55.416550 1162716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/functional-812436/id_rsa Username:docker}
	I1027 22:36:55.525557 1162716 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1027 22:36:55.525626 1162716 cache_images.go:255] Failed to load cached images for "functional-812436": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1027 22:36:55.525648 1162716 cache_images.go:267] failed pushing to: functional-812436

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)
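This failure is purely the cascade from ImageSaveToFile: the stderr above shows the load aborting with "no such file or directory" because the tar from the failed save was never created. Gating the load on the file's existence makes the dependency explicit:

	TAR=/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	[ -s "$TAR" ] \
	  && out/minikube-linux-arm64 -p functional-812436 image load "$TAR" --alsologtostderr \
	  || echo "skipping load: $TAR missing (see ImageSaveToFile)"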

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-812436
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 image save --daemon kicbase/echo-server:functional-812436 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-812436
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-812436: exit status 1 (20.763276ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-812436

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-812436

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)
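The test expects the image back in Docker under the localhost/ prefix; since the tag never reached the runtime store, `image save --daemon` had nothing to export. Inspecting both candidate names in the local daemon makes that visible:

	docker image ls 'localhost/kicbase/echo-server'
	docker image ls 'kicbase/echo-server'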

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.53s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-635165 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-635165 --output=json --user=testUser: exit status 80 (2.522515474s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b4490df6-fee4-4e81-a291-9e4958183d89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-635165 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"aa621545-34bb-427d-8551-085cb46398f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-27T22:49:41Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"ed0d389a-36dc-49d9-9114-ebf6bf676107","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-635165 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.53s)
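The GUEST_PAUSE error is unrelated to JSON output: pausing shells out to `sudo runc list -f json` on the node, which fails because /run/runc does not exist. Re-running the exact failing command over ssh isolates the runtime problem from minikube itself:

	out/minikube-linux-arm64 -p json-output-635165 ssh -- sudo runc list -f json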

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-635165 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-635165 --output=json --user=testUser: exit status 80 (1.648973968s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"48baae53-e0da-4dc6-b168-afdade2f0c60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-635165 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"a4d15e9f-7ae8-484f-92ef-3895cfd28e1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-27T22:49:43Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"534fbbea-09ac-48f8-9890-d0d0bb995fb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-635165 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.65s)
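
Note: with --output=json, minikube emits one CloudEvents-style JSON object per line, as captured above. A minimal Go sketch of decoding such a line to recover the machine-readable error (the field names are taken directly from the output above; the truncated message is illustrative):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event models one line of the stream; only the fields used here are declared.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"80","name":"GUEST_UNPAUSE","message":"Pause: list paused: runc: ..."}}`
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}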

                                                
                                    
TestPause/serial/Pause (8.4s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-180608 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-180608 --alsologtostderr -v=5: exit status 80 (2.164042622s)

                                                
                                                
-- stdout --
	* Pausing node pause-180608 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 23:08:28.671729 1277931 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:08:28.678581 1277931 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:08:28.678602 1277931 out.go:374] Setting ErrFile to fd 2...
	I1027 23:08:28.678608 1277931 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:08:28.679044 1277931 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:08:28.679462 1277931 out.go:368] Setting JSON to false
	I1027 23:08:28.679531 1277931 mustload.go:66] Loading cluster: pause-180608
	I1027 23:08:28.680313 1277931 config.go:182] Loaded profile config "pause-180608": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:08:28.681054 1277931 cli_runner.go:164] Run: docker container inspect pause-180608 --format={{.State.Status}}
	I1027 23:08:28.711071 1277931 host.go:66] Checking if "pause-180608" exists ...
	I1027 23:08:28.711394 1277931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:08:28.814521 1277931 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-27 23:08:28.80262024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:08:28.815181 1277931 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761414747-21797/minikube-v1.37.0-1761414747-21797-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761414747-21797-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-180608 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 23:08:28.818243 1277931 out.go:179] * Pausing node pause-180608 ... 
	I1027 23:08:28.822027 1277931 host.go:66] Checking if "pause-180608" exists ...
	I1027 23:08:28.822363 1277931 ssh_runner.go:195] Run: systemctl --version
	I1027 23:08:28.822510 1277931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:08:28.849627 1277931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/pause-180608/id_rsa Username:docker}
	I1027 23:08:28.964321 1277931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:08:28.999873 1277931 pause.go:52] kubelet running: true
	I1027 23:08:28.999939 1277931 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 23:08:29.325558 1277931 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 23:08:29.325639 1277931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 23:08:29.411348 1277931 cri.go:89] found id: "2e1bc6d366adf84302b7bcd049e7f88bcb3a9cfa520eb44ba543635e1f6ab359"
	I1027 23:08:29.411375 1277931 cri.go:89] found id: "53247afb6c26daf50454350a834356b289462e93b7f913f3e55b3555d45b700e"
	I1027 23:08:29.411380 1277931 cri.go:89] found id: "eac1eaa2581f322bde6c2d4ae935a6d2cb15370a30afec7a7667ae3a06ab0a7e"
	I1027 23:08:29.411384 1277931 cri.go:89] found id: "021da40950a294110e4541f9cb8799f59a838a0c2abc0af7436a6bebd4c0e8cd"
	I1027 23:08:29.411388 1277931 cri.go:89] found id: "7c741dedb9b95b51a18a73a8bae03bfd6e03223aee5c148db0fb790cd53ee265"
	I1027 23:08:29.411391 1277931 cri.go:89] found id: "90838204b928c48a4dbbbe5ce5299e995c32585a66accba00603e5262d6cbb97"
	I1027 23:08:29.411394 1277931 cri.go:89] found id: "893e096fab0047978d7befba788f303c50255093c6b08e3b673897a4a72cf757"
	I1027 23:08:29.411397 1277931 cri.go:89] found id: "64d490196d16ba5e9e067647e6c057744f2984df8bb471f59101d483eb228168"
	I1027 23:08:29.411400 1277931 cri.go:89] found id: "1852461627d88419e9ec506bd983019b2d829ddf9c13e1acb0e9a1afeaa96a41"
	I1027 23:08:29.411406 1277931 cri.go:89] found id: "2b428d4b7e6fbf4f947b835d957fda754922104d7bf53f17c3783574eafa08d7"
	I1027 23:08:29.411409 1277931 cri.go:89] found id: "8e2099955fee832bae84d5ff137f8359811066bc9c95e88db65fd0ae081d7627"
	I1027 23:08:29.411412 1277931 cri.go:89] found id: "11948704eefc0fd263f8fad40340db77a8d0431f866be69fc274a1e120cedcb1"
	I1027 23:08:29.411416 1277931 cri.go:89] found id: "190b5dd4515332ce06bf30b75f07111cc7134d2b22bc385fb9a47744a7ced680"
	I1027 23:08:29.411427 1277931 cri.go:89] found id: "ccf3881ff1ed45bc8d78cb82b817e75eea09bf871e82ef8b5245f5a2cf9233f2"
	I1027 23:08:29.411433 1277931 cri.go:89] found id: ""
	I1027 23:08:29.411484 1277931 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 23:08:29.423634 1277931 retry.go:31] will retry after 213.31923ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:08:29Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:08:29.638057 1277931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:08:29.655631 1277931 pause.go:52] kubelet running: false
	I1027 23:08:29.655752 1277931 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 23:08:29.851451 1277931 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 23:08:29.851597 1277931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 23:08:29.949333 1277931 cri.go:89] found id: "2e1bc6d366adf84302b7bcd049e7f88bcb3a9cfa520eb44ba543635e1f6ab359"
	I1027 23:08:29.949423 1277931 cri.go:89] found id: "53247afb6c26daf50454350a834356b289462e93b7f913f3e55b3555d45b700e"
	I1027 23:08:29.949443 1277931 cri.go:89] found id: "eac1eaa2581f322bde6c2d4ae935a6d2cb15370a30afec7a7667ae3a06ab0a7e"
	I1027 23:08:29.949463 1277931 cri.go:89] found id: "021da40950a294110e4541f9cb8799f59a838a0c2abc0af7436a6bebd4c0e8cd"
	I1027 23:08:29.949498 1277931 cri.go:89] found id: "7c741dedb9b95b51a18a73a8bae03bfd6e03223aee5c148db0fb790cd53ee265"
	I1027 23:08:29.949522 1277931 cri.go:89] found id: "90838204b928c48a4dbbbe5ce5299e995c32585a66accba00603e5262d6cbb97"
	I1027 23:08:29.949543 1277931 cri.go:89] found id: "893e096fab0047978d7befba788f303c50255093c6b08e3b673897a4a72cf757"
	I1027 23:08:29.949576 1277931 cri.go:89] found id: "64d490196d16ba5e9e067647e6c057744f2984df8bb471f59101d483eb228168"
	I1027 23:08:29.949598 1277931 cri.go:89] found id: "1852461627d88419e9ec506bd983019b2d829ddf9c13e1acb0e9a1afeaa96a41"
	I1027 23:08:29.949618 1277931 cri.go:89] found id: "2b428d4b7e6fbf4f947b835d957fda754922104d7bf53f17c3783574eafa08d7"
	I1027 23:08:29.949638 1277931 cri.go:89] found id: "8e2099955fee832bae84d5ff137f8359811066bc9c95e88db65fd0ae081d7627"
	I1027 23:08:29.949673 1277931 cri.go:89] found id: "11948704eefc0fd263f8fad40340db77a8d0431f866be69fc274a1e120cedcb1"
	I1027 23:08:29.949691 1277931 cri.go:89] found id: "190b5dd4515332ce06bf30b75f07111cc7134d2b22bc385fb9a47744a7ced680"
	I1027 23:08:29.949708 1277931 cri.go:89] found id: "ccf3881ff1ed45bc8d78cb82b817e75eea09bf871e82ef8b5245f5a2cf9233f2"
	I1027 23:08:29.949740 1277931 cri.go:89] found id: ""
	I1027 23:08:29.949830 1277931 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 23:08:29.961470 1277931 retry.go:31] will retry after 320.512277ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:08:29Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:08:30.283038 1277931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:08:30.304865 1277931 pause.go:52] kubelet running: false
	I1027 23:08:30.305010 1277931 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 23:08:30.548602 1277931 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 23:08:30.548764 1277931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 23:08:30.702267 1277931 cri.go:89] found id: "2e1bc6d366adf84302b7bcd049e7f88bcb3a9cfa520eb44ba543635e1f6ab359"
	I1027 23:08:30.702349 1277931 cri.go:89] found id: "53247afb6c26daf50454350a834356b289462e93b7f913f3e55b3555d45b700e"
	I1027 23:08:30.702396 1277931 cri.go:89] found id: "eac1eaa2581f322bde6c2d4ae935a6d2cb15370a30afec7a7667ae3a06ab0a7e"
	I1027 23:08:30.702421 1277931 cri.go:89] found id: "021da40950a294110e4541f9cb8799f59a838a0c2abc0af7436a6bebd4c0e8cd"
	I1027 23:08:30.702466 1277931 cri.go:89] found id: "7c741dedb9b95b51a18a73a8bae03bfd6e03223aee5c148db0fb790cd53ee265"
	I1027 23:08:30.702493 1277931 cri.go:89] found id: "90838204b928c48a4dbbbe5ce5299e995c32585a66accba00603e5262d6cbb97"
	I1027 23:08:30.702510 1277931 cri.go:89] found id: "893e096fab0047978d7befba788f303c50255093c6b08e3b673897a4a72cf757"
	I1027 23:08:30.702540 1277931 cri.go:89] found id: "64d490196d16ba5e9e067647e6c057744f2984df8bb471f59101d483eb228168"
	I1027 23:08:30.702564 1277931 cri.go:89] found id: "1852461627d88419e9ec506bd983019b2d829ddf9c13e1acb0e9a1afeaa96a41"
	I1027 23:08:30.702590 1277931 cri.go:89] found id: "2b428d4b7e6fbf4f947b835d957fda754922104d7bf53f17c3783574eafa08d7"
	I1027 23:08:30.702623 1277931 cri.go:89] found id: "8e2099955fee832bae84d5ff137f8359811066bc9c95e88db65fd0ae081d7627"
	I1027 23:08:30.702646 1277931 cri.go:89] found id: "11948704eefc0fd263f8fad40340db77a8d0431f866be69fc274a1e120cedcb1"
	I1027 23:08:30.702664 1277931 cri.go:89] found id: "190b5dd4515332ce06bf30b75f07111cc7134d2b22bc385fb9a47744a7ced680"
	I1027 23:08:30.702685 1277931 cri.go:89] found id: "ccf3881ff1ed45bc8d78cb82b817e75eea09bf871e82ef8b5245f5a2cf9233f2"
	I1027 23:08:30.702715 1277931 cri.go:89] found id: ""
	I1027 23:08:30.702803 1277931 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 23:08:30.726958 1277931 out.go:203] 
	W1027 23:08:30.729859 1277931 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:08:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:08:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 23:08:30.729879 1277931 out.go:285] * 
	* 
	W1027 23:08:30.742209 1277931 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 23:08:30.746780 1277931 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-180608 --alsologtostderr -v=5" : exit status 80
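
Note: the stderr above shows pause retrying the runc listing with growing waits (213ms, then 320ms) before giving up and exiting with GUEST_PAUSE. A stdlib-only Go sketch of that retry shape (the delays and attempt cap are illustrative, not minikube's actual policy):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// listRunning runs the same command the pause path retries on the node.
	func listRunning() error {
		return exec.Command("sudo", "runc", "list", "-f", "json").Run()
	}

	func main() {
		delay := 200 * time.Millisecond
		for attempt := 1; attempt <= 3; attempt++ {
			err := listRunning()
			if err == nil {
				fmt.Println("containers listed")
				return
			}
			fmt.Printf("attempt %d failed: %v; retrying in %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay += delay / 2 // grow the wait, matching the log's increasing delays
		}
		fmt.Println("giving up after retries (surfaced above as GUEST_PAUSE)")
	}
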
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-180608
helpers_test.go:243: (dbg) docker inspect pause-180608:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5efefd6988c38e832c9c43f319ad43f5b6069cc47cff45c0895bcd60f18e9fee",
	        "Created": "2025-10-27T23:06:30.896490938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1265475,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T23:06:30.993227097Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/5efefd6988c38e832c9c43f319ad43f5b6069cc47cff45c0895bcd60f18e9fee/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5efefd6988c38e832c9c43f319ad43f5b6069cc47cff45c0895bcd60f18e9fee/hostname",
	        "HostsPath": "/var/lib/docker/containers/5efefd6988c38e832c9c43f319ad43f5b6069cc47cff45c0895bcd60f18e9fee/hosts",
	        "LogPath": "/var/lib/docker/containers/5efefd6988c38e832c9c43f319ad43f5b6069cc47cff45c0895bcd60f18e9fee/5efefd6988c38e832c9c43f319ad43f5b6069cc47cff45c0895bcd60f18e9fee-json.log",
	        "Name": "/pause-180608",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-180608:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-180608",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5efefd6988c38e832c9c43f319ad43f5b6069cc47cff45c0895bcd60f18e9fee",
	                "LowerDir": "/var/lib/docker/overlay2/98b68858b82ed5749f9ce02f72af5d1d73d864ca5c7c401657a0bfb3497ba884-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98b68858b82ed5749f9ce02f72af5d1d73d864ca5c7c401657a0bfb3497ba884/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98b68858b82ed5749f9ce02f72af5d1d73d864ca5c7c401657a0bfb3497ba884/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98b68858b82ed5749f9ce02f72af5d1d73d864ca5c7c401657a0bfb3497ba884/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-180608",
	                "Source": "/var/lib/docker/volumes/pause-180608/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-180608",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-180608",
	                "name.minikube.sigs.k8s.io": "pause-180608",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b4c1962cdb455d35aa42f4d6268d85c18ce64bfaeda7756e54df47ea8e96bbe6",
	            "SandboxKey": "/var/run/docker/netns/b4c1962cdb45",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34449"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34450"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34453"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34451"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34452"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-180608": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:cb:23:25:68:2e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e57e66724fdc9a76ce4a5e6d71596915361b300178f7a0743fab0d1d0bf19ab8",
	                    "EndpointID": "6ff078f3c7d6e42420fb1106a39e32900146765df7825684a280c17b54e33407",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-180608",
	                        "5efefd6988c3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-180608 -n pause-180608
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-180608 -n pause-180608: exit status 2 (536.025375ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-180608 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-180608 logs -n 25: (1.848867458s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-440075 sudo systemctl cat kubelet --no-pager                                                     │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo systemctl status docker --all --full --no-pager                                      │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo systemctl cat docker --no-pager                                                      │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo cat /etc/docker/daemon.json                                                          │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo docker system info                                                                   │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo cri-dockerd --version                                                                │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo systemctl cat containerd --no-pager                                                  │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo cat /etc/containerd/config.toml                                                      │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo containerd config dump                                                               │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo systemctl status crio --all --full --no-pager                                        │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo systemctl cat crio --no-pager                                                        │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo crio config                                                                          │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ delete  │ -p cilium-440075                                                                                           │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │ 27 Oct 25 23:07 UTC │
	│ start   │ -p force-systemd-env-179399 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-179399 │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ pause   │ -p pause-180608 --alsologtostderr -v=5                                                                     │ pause-180608             │ jenkins │ v1.37.0 │ 27 Oct 25 23:08 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 23:07:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 23:07:58.382003 1275118 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:07:58.382449 1275118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:07:58.382488 1275118 out.go:374] Setting ErrFile to fd 2...
	I1027 23:07:58.382508 1275118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:07:58.382796 1275118 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:07:58.383276 1275118 out.go:368] Setting JSON to false
	I1027 23:07:58.384198 1275118 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":21028,"bootTime":1761585451,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 23:07:58.384302 1275118 start.go:143] virtualization:  
	I1027 23:07:58.387739 1275118 out.go:179] * [force-systemd-env-179399] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 23:07:58.391995 1275118 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:07:58.392097 1275118 notify.go:221] Checking for updates...
	I1027 23:07:58.397820 1275118 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:07:58.400730 1275118 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:07:58.403708 1275118 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 23:07:58.406654 1275118 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 23:07:58.409560 1275118 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1027 23:07:58.413108 1275118 config.go:182] Loaded profile config "pause-180608": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:07:58.413232 1275118 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:07:58.443885 1275118 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 23:07:58.444015 1275118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:07:58.520694 1275118 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 23:07:58.51150282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:07:58.520813 1275118 docker.go:318] overlay module found
	I1027 23:07:58.525877 1275118 out.go:179] * Using the docker driver based on user configuration
	I1027 23:07:58.528766 1275118 start.go:307] selected driver: docker
	I1027 23:07:58.528790 1275118 start.go:928] validating driver "docker" against <nil>
	I1027 23:07:58.528821 1275118 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:07:58.529708 1275118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:07:58.581660 1275118 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 23:07:58.572044469 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:07:58.581827 1275118 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 23:07:58.582089 1275118 start_flags.go:973] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 23:07:58.584935 1275118 out.go:179] * Using Docker driver with root privileges
	I1027 23:07:58.587714 1275118 cni.go:84] Creating CNI manager for ""
	I1027 23:07:58.587780 1275118 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:07:58.587794 1275118 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 23:07:58.587870 1275118 start.go:351] cluster config:
	{Name:force-systemd-env-179399 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-179399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:07:58.591004 1275118 out.go:179] * Starting "force-systemd-env-179399" primary control-plane node in "force-systemd-env-179399" cluster
	I1027 23:07:58.593763 1275118 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 23:07:58.596639 1275118 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 23:07:58.599437 1275118 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:07:58.599493 1275118 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 23:07:58.599506 1275118 cache.go:59] Caching tarball of preloaded images
	I1027 23:07:58.599520 1275118 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 23:07:58.599587 1275118 preload.go:233] Found /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 23:07:58.599597 1275118 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 23:07:58.599710 1275118 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/config.json ...
	I1027 23:07:58.599732 1275118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/config.json: {Name:mk83428b2aa61453697f46bac5df6e9ebab70e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:07:58.618476 1275118 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 23:07:58.618499 1275118 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 23:07:58.618519 1275118 cache.go:233] Successfully downloaded all kic artifacts
	I1027 23:07:58.618543 1275118 start.go:360] acquireMachinesLock for force-systemd-env-179399: {Name:mkb2557f6b9cf7bc1dd1a195fbe38189a74b4ca6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:07:58.618657 1275118 start.go:364] duration metric: took 92.843µs to acquireMachinesLock for "force-systemd-env-179399"
	I1027 23:07:58.618693 1275118 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-179399 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-179399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:07:58.618764 1275118 start.go:125] createHost starting for "" (driver="docker")
	I1027 23:07:55.930948 1274679 out.go:252] * Updating the running docker "pause-180608" container ...
	I1027 23:07:55.930981 1274679 machine.go:94] provisionDockerMachine start ...
	I1027 23:07:55.931060 1274679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:07:55.956401 1274679 main.go:143] libmachine: Using SSH client type: native
	I1027 23:07:55.956722 1274679 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34449 <nil> <nil>}
	I1027 23:07:55.956737 1274679 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:07:56.118093 1274679 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-180608
	
	I1027 23:07:56.118121 1274679 ubuntu.go:182] provisioning hostname "pause-180608"
	I1027 23:07:56.118194 1274679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:07:56.161213 1274679 main.go:143] libmachine: Using SSH client type: native
	I1027 23:07:56.161516 1274679 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34449 <nil> <nil>}
	I1027 23:07:56.161527 1274679 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-180608 && echo "pause-180608" | sudo tee /etc/hostname
	I1027 23:07:56.339361 1274679 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-180608
	
	I1027 23:07:56.339432 1274679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:07:56.366811 1274679 main.go:143] libmachine: Using SSH client type: native
	I1027 23:07:56.367100 1274679 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34449 <nil> <nil>}
	I1027 23:07:56.367115 1274679 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-180608' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-180608/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-180608' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:07:56.538490 1274679 main.go:143] libmachine: SSH cmd err, output: <nil>: 
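The script above makes the 127.0.1.1 mapping idempotent: it rewrites an existing 127.0.1.1 line when present, appends one otherwise, and does nothing if the hostname is already mapped. A minimal Go sketch of how such a snippet could be generated from the hostname alone (setHostnameScript is a hypothetical helper, not the actual minikube source):

package sketch

import "fmt"

// setHostnameScript reproduces the idempotent /etc/hosts edit logged above:
// rewrite the existing 127.0.1.1 line if present, append otherwise,
// and do nothing when the hostname is already mapped.
func setHostnameScript(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
	else
		echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
	fi
fi`, hostname)
}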
	I1027 23:07:56.538517 1274679 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:07:56.538549 1274679 ubuntu.go:190] setting up certificates
	I1027 23:07:56.538559 1274679 provision.go:84] configureAuth start
	I1027 23:07:56.538629 1274679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-180608
	I1027 23:07:56.562998 1274679 provision.go:143] copyHostCerts
	I1027 23:07:56.563065 1274679 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:07:56.563087 1274679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:07:56.563168 1274679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:07:56.563267 1274679 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:07:56.563279 1274679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:07:56.563306 1274679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:07:56.563403 1274679 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:07:56.563413 1274679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:07:56.563438 1274679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:07:56.563492 1274679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.pause-180608 san=[127.0.0.1 192.168.76.2 localhost minikube pause-180608]
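provision.go:117 signs a server certificate against the local CA, embedding the SANs listed in the log line (both IPs and DNS names). A minimal sketch of that step with the standard crypto/x509 package, assuming the CA certificate and key are already parsed (newServerCert is an illustrative name, not minikube's):

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate for org with the given SANs,
// sorting each SAN into IPAddresses or DNSNames as appropriate.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, san := range sans {
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}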
	I1027 23:07:57.401131 1274679 provision.go:177] copyRemoteCerts
	I1027 23:07:57.401251 1274679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:07:57.401300 1274679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:07:57.420121 1274679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/pause-180608/id_rsa Username:docker}
	I1027 23:07:57.540396 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 23:07:57.562539 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 23:07:57.584892 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:07:57.605900 1274679 provision.go:87] duration metric: took 1.067319329s to configureAuth
	I1027 23:07:57.605928 1274679 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:07:57.606164 1274679 config.go:182] Loaded profile config "pause-180608": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:07:57.606262 1274679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:07:57.636038 1274679 main.go:143] libmachine: Using SSH client type: native
	I1027 23:07:57.636344 1274679 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34449 <nil> <nil>}
	I1027 23:07:57.636359 1274679 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:07:58.622047 1275118 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 23:07:58.622293 1275118 start.go:159] libmachine.API.Create for "force-systemd-env-179399" (driver="docker")
	I1027 23:07:58.622340 1275118 client.go:173] LocalClient.Create starting
	I1027 23:07:58.622439 1275118 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem
	I1027 23:07:58.622482 1275118 main.go:143] libmachine: Decoding PEM data...
	I1027 23:07:58.622504 1275118 main.go:143] libmachine: Parsing certificate...
	I1027 23:07:58.622570 1275118 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem
	I1027 23:07:58.622594 1275118 main.go:143] libmachine: Decoding PEM data...
	I1027 23:07:58.622604 1275118 main.go:143] libmachine: Parsing certificate...
	I1027 23:07:58.623006 1275118 cli_runner.go:164] Run: docker network inspect force-systemd-env-179399 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 23:07:58.640447 1275118 cli_runner.go:211] docker network inspect force-systemd-env-179399 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 23:07:58.640550 1275118 network_create.go:284] running [docker network inspect force-systemd-env-179399] to gather additional debugging logs...
	I1027 23:07:58.640568 1275118 cli_runner.go:164] Run: docker network inspect force-systemd-env-179399
	W1027 23:07:58.657276 1275118 cli_runner.go:211] docker network inspect force-systemd-env-179399 returned with exit code 1
	I1027 23:07:58.657304 1275118 network_create.go:287] error running [docker network inspect force-systemd-env-179399]: docker network inspect force-systemd-env-179399: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-179399 not found
	I1027 23:07:58.657335 1275118 network_create.go:289] output of [docker network inspect force-systemd-env-179399]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-179399 not found
	
	** /stderr **
	I1027 23:07:58.657434 1275118 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:07:58.674338 1275118 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bec5bade6d32 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:b8:32:37:d1:1a} reservation:<nil>}
	I1027 23:07:58.674655 1275118 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0dc359f1a23c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:03:b5:bc:b2:ab} reservation:<nil>}
	I1027 23:07:58.674963 1275118 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6865072e7c41 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:f3:83:1f:14:0e} reservation:<nil>}
	I1027 23:07:58.675282 1275118 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e57e66724fdc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:bd:b1:42:6d:9f} reservation:<nil>}
	I1027 23:07:58.675687 1275118 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3f0c0}
	I1027 23:07:58.675716 1275118 network_create.go:124] attempt to create docker network force-systemd-env-179399 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1027 23:07:58.675773 1275118 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-179399 force-systemd-env-179399
	I1027 23:07:58.736029 1275118 network_create.go:108] docker network force-systemd-env-179399 192.168.85.0/24 created
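network.go walks candidate private /24s in order (192.168.49, .58, .67, .76, .85, ...) and takes the first one that no existing docker bridge claims, which is why the log skips four taken subnets before landing on 192.168.85.0/24. A sketch of that scan, assuming the taken subnets were already collected from `docker network inspect`:

package sketch

import (
	"fmt"
	"net"
)

// freeSubnet returns the first candidate CIDR that does not overlap
// any subnet already claimed by an existing docker network.
func freeSubnet(taken []*net.IPNet, candidates []string) (*net.IPNet, error) {
	for _, c := range candidates {
		_, cand, err := net.ParseCIDR(c)
		if err != nil {
			return nil, err
		}
		clash := false
		for _, t := range taken {
			if t.Contains(cand.IP) || cand.Contains(t.IP) {
				clash = true
				break
			}
		}
		if !clash {
			return cand, nil
		}
	}
	return nil, fmt.Errorf("no free subnet among %v", candidates)
}

With the four taken subnets above and candidates {192.168.49.0/24, ..., 192.168.85.0/24}, this lands on 192.168.85.0/24, matching the log.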
	I1027 23:07:58.736061 1275118 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-179399" container
	I1027 23:07:58.736146 1275118 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 23:07:58.752904 1275118 cli_runner.go:164] Run: docker volume create force-systemd-env-179399 --label name.minikube.sigs.k8s.io=force-systemd-env-179399 --label created_by.minikube.sigs.k8s.io=true
	I1027 23:07:58.772427 1275118 oci.go:103] Successfully created a docker volume force-systemd-env-179399
	I1027 23:07:58.772524 1275118 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-179399-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-179399 --entrypoint /usr/bin/test -v force-systemd-env-179399:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 23:07:59.289240 1275118 oci.go:107] Successfully prepared a docker volume force-systemd-env-179399
	I1027 23:07:59.289289 1275118 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:07:59.289309 1275118 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 23:07:59.289388 1275118 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-179399:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
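The preload is unpacked by a throwaway container that bind-mounts the tarball read-only and the named volume as the extraction target, so the images land on the volume rather than the host. A sketch of assembling that invocation with os/exec (the arguments mirror the log line; extractPreload is an illustrative name):

package sketch

import (
	"os"
	"os/exec"
)

// extractPreload runs tar inside a disposable kicbase container so the
// preloaded images end up on the named volume.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}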
	I1027 23:08:03.256030 1274679 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:08:03.256052 1274679 machine.go:97] duration metric: took 7.325062268s to provisionDockerMachine
	I1027 23:08:03.256063 1274679 start.go:293] postStartSetup for "pause-180608" (driver="docker")
	I1027 23:08:03.256074 1274679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:08:03.256136 1274679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:08:03.256195 1274679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:08:03.274809 1274679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/pause-180608/id_rsa Username:docker}
	I1027 23:08:03.378961 1274679 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:08:03.382707 1274679 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:08:03.382786 1274679 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:08:03.382812 1274679 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:08:03.382886 1274679 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:08:03.382979 1274679 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:08:03.383083 1274679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:08:03.390637 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:08:03.409786 1274679 start.go:296] duration metric: took 153.707383ms for postStartSetup
	I1027 23:08:03.409887 1274679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:08:03.409947 1274679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:08:03.428706 1274679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/pause-180608/id_rsa Username:docker}
	I1027 23:08:03.532006 1274679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:08:03.537316 1274679 fix.go:57] duration metric: took 7.639402094s for fixHost
	I1027 23:08:03.537343 1274679 start.go:83] releasing machines lock for "pause-180608", held for 7.639455764s
	I1027 23:08:03.537411 1274679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-180608
	I1027 23:08:03.554616 1274679 ssh_runner.go:195] Run: cat /version.json
	I1027 23:08:03.554674 1274679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:08:03.554736 1274679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:08:03.554810 1274679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:08:03.575301 1274679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/pause-180608/id_rsa Username:docker}
	I1027 23:08:03.578446 1274679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/pause-180608/id_rsa Username:docker}
	I1027 23:08:03.768933 1274679 ssh_runner.go:195] Run: systemctl --version
	I1027 23:08:03.775656 1274679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:08:03.826355 1274679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:08:03.832274 1274679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:08:03.832396 1274679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:08:03.840748 1274679 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 23:08:03.840780 1274679 start.go:496] detecting cgroup driver to use...
	I1027 23:08:03.840833 1274679 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 23:08:03.840897 1274679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:08:03.856977 1274679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:08:03.871439 1274679 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:08:03.871524 1274679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:08:03.889001 1274679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:08:03.902958 1274679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:08:04.048484 1274679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:08:04.237985 1274679 docker.go:234] disabling docker service ...
	I1027 23:08:04.238064 1274679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:08:04.256902 1274679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:08:04.281270 1274679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:08:04.480505 1274679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:08:04.724739 1274679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:08:04.758043 1274679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:08:04.794677 1274679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:08:04.794747 1274679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:04.811267 1274679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:08:04.811354 1274679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:04.833069 1274679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:04.859024 1274679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:04.873141 1274679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:08:04.886017 1274679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:04.896833 1274679 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:04.917898 1274679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
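Each sed call above rewrites (or, guarded by the grep, inserts) a single `key = value` line in /etc/crio/crio.conf.d/02-crio.conf. A rough Go analogue of that line-level edit, for illustration only (setConfKey is hypothetical, not a minikube helper):

package sketch

import "regexp"

// setConfKey replaces any existing `key = ...` line with the desired
// value, appending the line when the key is absent -- the same effect
// as the sed/grep pair in the log.
func setConfKey(conf, key, val string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := key + " = " + val
	if re.MatchString(conf) {
		return re.ReplaceAllString(conf, line)
	}
	return conf + "\n" + line + "\n"
}

For example, setConfKey(conf, "cgroup_manager", `"cgroupfs"`) performs the same rewrite as the second sed above.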
	I1027 23:08:04.930227 1274679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:08:04.948342 1274679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:08:04.965233 1274679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:08:05.332543 1274679 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 23:08:05.713719 1274679 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:08:05.713786 1274679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:08:05.717769 1274679 start.go:564] Will wait 60s for crictl version
	I1027 23:08:05.717844 1274679 ssh_runner.go:195] Run: which crictl
	I1027 23:08:05.722972 1274679 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:08:05.762700 1274679 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 23:08:05.762787 1274679 ssh_runner.go:195] Run: crio --version
	I1027 23:08:05.808161 1274679 ssh_runner.go:195] Run: crio --version
	I1027 23:08:05.854047 1274679 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 23:08:05.857230 1274679 cli_runner.go:164] Run: docker network inspect pause-180608 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:08:05.881635 1274679 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 23:08:05.886449 1274679 kubeadm.go:884] updating cluster {Name:pause-180608 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-180608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:08:05.886592 1274679 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:08:05.886645 1274679 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:08:05.940329 1274679 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:08:05.940351 1274679 crio.go:433] Images already preloaded, skipping extraction
	I1027 23:08:05.940409 1274679 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:08:05.988404 1274679 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:08:05.988486 1274679 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:08:05.988509 1274679 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 23:08:05.988641 1274679 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-180608 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-180608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 23:08:05.988796 1274679 ssh_runner.go:195] Run: crio config
	I1027 23:08:06.057416 1274679 cni.go:84] Creating CNI manager for ""
	I1027 23:08:06.057493 1274679 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:08:06.057526 1274679 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 23:08:06.057581 1274679 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-180608 NodeName:pause-180608 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:08:06.057763 1274679 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-180608"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
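The three YAML documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options logged at kubeadm.go:190. A toy sketch of how one such block can be produced with text/template; the template text and field names here are illustrative only, not minikube's actual template:

package sketch

import (
	"strings"
	"text/template"
)

var kubeletTmpl = template.Must(template.New("kubelet").Parse(`apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
containerRuntimeEndpoint: {{.CRISocket}}
clusterDomain: "{{.DNSDomain}}"
`))

// renderKubelet fills the template from a flat options map, the same
// shape as the KubeletConfigOpts map in the log above.
func renderKubelet(opts map[string]string) (string, error) {
	var b strings.Builder
	err := kubeletTmpl.Execute(&b, opts)
	return b.String(), err
}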
	
	I1027 23:08:06.057882 1274679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:08:06.069271 1274679 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:08:06.069345 1274679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:08:06.080447 1274679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1027 23:08:06.098633 1274679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:08:06.116956 1274679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1027 23:08:06.134552 1274679 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:08:06.139628 1274679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:08:06.306416 1274679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:08:06.319676 1274679 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608 for IP: 192.168.76.2
	I1027 23:08:06.319759 1274679 certs.go:195] generating shared ca certs ...
	I1027 23:08:06.319797 1274679 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:06.319972 1274679 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:08:06.320042 1274679 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:08:06.320066 1274679 certs.go:257] generating profile certs ...
	I1027 23:08:06.320176 1274679 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.key
	I1027 23:08:06.320289 1274679 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/apiserver.key.8063c8c5
	I1027 23:08:06.320372 1274679 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/proxy-client.key
	I1027 23:08:06.320502 1274679 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:08:06.320568 1274679 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:08:06.320603 1274679 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:08:06.320658 1274679 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:08:06.320722 1274679 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:08:06.320766 1274679 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:08:06.320849 1274679 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:08:06.321538 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:08:06.340898 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:08:06.358354 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:08:06.376336 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:08:06.394040 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 23:08:06.419502 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 23:08:06.437614 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:08:06.456046 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 23:08:06.475815 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:08:06.494485 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:08:06.513196 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:08:06.531179 1274679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:08:06.546802 1274679 ssh_runner.go:195] Run: openssl version
	I1027 23:08:06.553249 1274679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:08:06.562027 1274679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:08:06.565925 1274679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:08:06.566014 1274679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:08:06.613738 1274679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
	I1027 23:08:06.622605 1274679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:08:06.632100 1274679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:08:06.636070 1274679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:08:06.636180 1274679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:08:06.677370 1274679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:08:06.685420 1274679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:08:06.693628 1274679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:08:06.697466 1274679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:08:06.697563 1274679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:08:06.740560 1274679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 23:08:06.748942 1274679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:08:06.752895 1274679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 23:08:06.793791 1274679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 23:08:06.835055 1274679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 23:08:06.876514 1274679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 23:08:06.917911 1274679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 23:08:06.959314 1274679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
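The six `openssl x509 -checkend 86400` runs above verify that none of the control-plane certificates expires within the next 24 hours (86,400 seconds). The same check can be expressed in Go with the standard library (expiresWithin is an illustrative helper, not minikube code):

package sketch

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"time"
)

// expiresWithin reports whether a PEM-encoded certificate will expire
// within d -- the Go analogue of `openssl x509 -checkend`.
func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}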
	I1027 23:08:07.000791 1274679 kubeadm.go:401] StartCluster: {Name:pause-180608 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-180608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:08:07.000926 1274679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:08:07.000999 1274679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:08:07.034058 1274679 cri.go:89] found id: "64d490196d16ba5e9e067647e6c057744f2984df8bb471f59101d483eb228168"
	I1027 23:08:07.034080 1274679 cri.go:89] found id: "1852461627d88419e9ec506bd983019b2d829ddf9c13e1acb0e9a1afeaa96a41"
	I1027 23:08:07.034085 1274679 cri.go:89] found id: "2b428d4b7e6fbf4f947b835d957fda754922104d7bf53f17c3783574eafa08d7"
	I1027 23:08:07.034089 1274679 cri.go:89] found id: "8e2099955fee832bae84d5ff137f8359811066bc9c95e88db65fd0ae081d7627"
	I1027 23:08:07.034093 1274679 cri.go:89] found id: "11948704eefc0fd263f8fad40340db77a8d0431f866be69fc274a1e120cedcb1"
	I1027 23:08:07.034096 1274679 cri.go:89] found id: "190b5dd4515332ce06bf30b75f07111cc7134d2b22bc385fb9a47744a7ced680"
	I1027 23:08:07.034098 1274679 cri.go:89] found id: "ccf3881ff1ed45bc8d78cb82b817e75eea09bf871e82ef8b5245f5a2cf9233f2"
	I1027 23:08:07.034101 1274679 cri.go:89] found id: ""
	I1027 23:08:07.034169 1274679 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 23:08:07.045499 1274679 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:08:07Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:08:07.045581 1274679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:08:07.053825 1274679 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 23:08:07.053847 1274679 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 23:08:07.053926 1274679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 23:08:07.062123 1274679 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 23:08:07.062808 1274679 kubeconfig.go:125] found "pause-180608" server: "https://192.168.76.2:8443"
	I1027 23:08:07.063383 1274679 kapi.go:59] client config for pause-180608: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.crt", KeyFile:"/home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.key", CAFile:"/home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21204e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 23:08:07.063874 1274679 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1027 23:08:07.063893 1274679 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1027 23:08:07.063899 1274679 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1027 23:08:07.063907 1274679 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1027 23:08:07.063912 1274679 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1027 23:08:07.064170 1274679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 23:08:07.072177 1274679 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1027 23:08:07.072210 1274679 kubeadm.go:602] duration metric: took 18.357098ms to restartPrimaryControlPlane
	I1027 23:08:07.072220 1274679 kubeadm.go:403] duration metric: took 71.454804ms to StartCluster
	I1027 23:08:07.072234 1274679 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:07.072313 1274679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:08:07.072939 1274679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:07.073167 1274679 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:08:07.073506 1274679 config.go:182] Loaded profile config "pause-180608": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:08:07.073556 1274679 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:08:07.078630 1274679 out.go:179] * Verifying Kubernetes components...
	I1027 23:08:07.078702 1274679 out.go:179] * Enabled addons: 
	I1027 23:08:04.083053 1275118 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-179399:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.793620521s)
	I1027 23:08:04.083085 1275118 kic.go:203] duration metric: took 4.79377213s to extract preloaded images to volume ...
	W1027 23:08:04.083250 1275118 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 23:08:04.083359 1275118 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 23:08:04.192939 1275118 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-179399 --name force-systemd-env-179399 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-179399 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-179399 --network force-systemd-env-179399 --ip 192.168.85.2 --volume force-systemd-env-179399:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 23:08:04.577431 1275118 cli_runner.go:164] Run: docker container inspect force-systemd-env-179399 --format={{.State.Running}}
	I1027 23:08:04.599648 1275118 cli_runner.go:164] Run: docker container inspect force-systemd-env-179399 --format={{.State.Status}}
	I1027 23:08:04.625272 1275118 cli_runner.go:164] Run: docker exec force-systemd-env-179399 stat /var/lib/dpkg/alternatives/iptables
	I1027 23:08:04.689224 1275118 oci.go:144] the created container "force-systemd-env-179399" has a running status.
	I1027 23:08:04.689259 1275118 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/force-systemd-env-179399/id_rsa...
	I1027 23:08:05.434439 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/force-systemd-env-179399/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1027 23:08:05.434484 1275118 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/force-systemd-env-179399/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 23:08:05.461757 1275118 cli_runner.go:164] Run: docker container inspect force-systemd-env-179399 --format={{.State.Status}}
	I1027 23:08:05.486255 1275118 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 23:08:05.486275 1275118 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-179399 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 23:08:05.560840 1275118 cli_runner.go:164] Run: docker container inspect force-systemd-env-179399 --format={{.State.Status}}
	I1027 23:08:05.588047 1275118 machine.go:94] provisionDockerMachine start ...
	I1027 23:08:05.588148 1275118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-179399
	I1027 23:08:05.615631 1275118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:08:05.615970 1275118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34469 <nil> <nil>}
	I1027 23:08:05.615979 1275118 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:08:05.616751 1275118 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53666->127.0.0.1:34469: read: connection reset by peer
	I1027 23:08:07.082249 1274679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:08:07.082413 1274679 addons.go:514] duration metric: took 8.825526ms for enable addons: enabled=[]
	I1027 23:08:07.211285 1274679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:08:07.224959 1274679 node_ready.go:35] waiting up to 6m0s for node "pause-180608" to be "Ready" ...
	W1027 23:08:09.225548 1274679 node_ready.go:55] error getting node "pause-180608" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/pause-180608": dial tcp 192.168.76.2:8443: connect: connection refused
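node_ready.go treats the connection-refused error above as transient and keeps polling until its 6m0s budget runs out, since the apiserver is expected to come back as kubelet restarts it. A generic sketch of that retry loop (waitReady and the 2-second interval are assumptions for illustration, not minikube's actual timings):

package sketch

import (
	"fmt"
	"time"
)

// waitReady polls check until it succeeds or timeout elapses; transient
// errors such as "connection refused" during an apiserver restart are
// simply retried on the next tick.
func waitReady(timeout time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ok, err := check(); err == nil && ok {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node not ready after %v", timeout)
}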
	I1027 23:08:08.770059 1275118 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-179399
	
	I1027 23:08:08.770125 1275118 ubuntu.go:182] provisioning hostname "force-systemd-env-179399"
	I1027 23:08:08.770196 1275118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-179399
	I1027 23:08:08.787603 1275118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:08:08.787929 1275118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34469 <nil> <nil>}
	I1027 23:08:08.787947 1275118 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-179399 && echo "force-systemd-env-179399" | sudo tee /etc/hostname
	I1027 23:08:08.948331 1275118 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-179399
	
	I1027 23:08:08.948411 1275118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-179399
	I1027 23:08:08.965915 1275118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:08:08.966282 1275118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34469 <nil> <nil>}
	I1027 23:08:08.966307 1275118 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-179399' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-179399/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-179399' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:08:09.118885 1275118 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 23:08:09.118960 1275118 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:08:09.118998 1275118 ubuntu.go:190] setting up certificates
	I1027 23:08:09.119037 1275118 provision.go:84] configureAuth start
	I1027 23:08:09.119148 1275118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-179399
	I1027 23:08:09.136260 1275118 provision.go:143] copyHostCerts
	I1027 23:08:09.136307 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:08:09.136340 1275118 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:08:09.136347 1275118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:08:09.136423 1275118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:08:09.136498 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:08:09.136514 1275118 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:08:09.136518 1275118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:08:09.136542 1275118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:08:09.136579 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:08:09.136595 1275118 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:08:09.136599 1275118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:08:09.136620 1275118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:08:09.136663 1275118 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-179399 san=[127.0.0.1 192.168.85.2 force-systemd-env-179399 localhost minikube]
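	(The server certificate is issued for the SAN set logged above: loopback, the container IP 192.168.85.2, the machine name, localhost, and minikube. Assuming OpenSSL 1.1.1 or newer, the SANs that actually landed in the cert can be read back with:

	  openssl x509 -noout -ext subjectAltName \
	    -in /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem
	)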
	I1027 23:08:10.172774 1275118 provision.go:177] copyRemoteCerts
	I1027 23:08:10.172857 1275118 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:08:10.172907 1275118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-179399
	I1027 23:08:10.196035 1275118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34469 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/force-systemd-env-179399/id_rsa Username:docker}
	I1027 23:08:10.327690 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1027 23:08:10.327754 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:08:10.360125 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1027 23:08:10.360186 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1027 23:08:10.388988 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1027 23:08:10.389053 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 23:08:10.416149 1275118 provision.go:87] duration metric: took 1.297068472s to configureAuth
	I1027 23:08:10.416178 1275118 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:08:10.416348 1275118 config.go:182] Loaded profile config "force-systemd-env-179399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:08:10.416468 1275118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-179399
	I1027 23:08:10.443858 1275118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:08:10.444178 1275118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34469 <nil> <nil>}
	I1027 23:08:10.444199 1275118 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:08:10.827633 1275118 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
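	(The step above drops the service CIDR into CRIO_MINIKUBE_OPTIONS as an --insecure-registry flag and restarts CRI-O. A minimal check that the drop-in landed, assuming the usual `minikube ssh` command pass-through:

	  minikube -p force-systemd-env-179399 ssh -- \
	    "cat /etc/sysconfig/crio.minikube && systemctl is-active crio"
	)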
	I1027 23:08:10.827664 1275118 machine.go:97] duration metric: took 5.239597227s to provisionDockerMachine
	I1027 23:08:10.827674 1275118 client.go:176] duration metric: took 12.205322951s to LocalClient.Create
	I1027 23:08:10.827688 1275118 start.go:167] duration metric: took 12.205396897s to libmachine.API.Create "force-systemd-env-179399"
	I1027 23:08:10.827699 1275118 start.go:293] postStartSetup for "force-systemd-env-179399" (driver="docker")
	I1027 23:08:10.827710 1275118 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:08:10.827785 1275118 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:08:10.827830 1275118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-179399
	I1027 23:08:10.863923 1275118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34469 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/force-systemd-env-179399/id_rsa Username:docker}
	I1027 23:08:10.995539 1275118 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:08:10.999002 1275118 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:08:10.999028 1275118 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:08:10.999039 1275118 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:08:10.999095 1275118 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:08:10.999177 1275118 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:08:10.999183 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> /etc/ssl/certs/11347352.pem
	I1027 23:08:10.999305 1275118 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:08:11.007829 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:08:11.026710 1275118 start.go:296] duration metric: took 198.981068ms for postStartSetup
	I1027 23:08:11.027072 1275118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-179399
	I1027 23:08:11.045732 1275118 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/config.json ...
	I1027 23:08:11.046005 1275118 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:08:11.046068 1275118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-179399
	I1027 23:08:11.071989 1275118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34469 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/force-systemd-env-179399/id_rsa Username:docker}
	I1027 23:08:11.195005 1275118 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:08:11.203341 1275118 start.go:128] duration metric: took 12.584561389s to createHost
	I1027 23:08:11.203367 1275118 start.go:83] releasing machines lock for "force-systemd-env-179399", held for 12.584692968s
	I1027 23:08:11.203439 1275118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-179399
	I1027 23:08:11.231962 1275118 ssh_runner.go:195] Run: cat /version.json
	I1027 23:08:11.232015 1275118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-179399
	I1027 23:08:11.232043 1275118 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:08:11.232111 1275118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-179399
	I1027 23:08:11.259758 1275118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34469 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/force-systemd-env-179399/id_rsa Username:docker}
	I1027 23:08:11.284421 1275118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34469 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/force-systemd-env-179399/id_rsa Username:docker}
	I1027 23:08:11.382015 1275118 ssh_runner.go:195] Run: systemctl --version
	I1027 23:08:11.521380 1275118 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:08:11.601101 1275118 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:08:11.607880 1275118 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:08:11.607952 1275118 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:08:11.649990 1275118 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1027 23:08:11.650012 1275118 start.go:496] detecting cgroup driver to use...
	I1027 23:08:11.650027 1275118 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1027 23:08:11.650081 1275118 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:08:11.669872 1275118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:08:11.684176 1275118 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:08:11.684289 1275118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:08:11.702877 1275118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:08:11.723469 1275118 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:08:11.908003 1275118 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:08:12.125202 1275118 docker.go:234] disabling docker service ...
	I1027 23:08:12.125277 1275118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:08:12.155647 1275118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:08:12.178577 1275118 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:08:12.404667 1275118 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:08:12.644212 1275118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:08:12.667630 1275118 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:08:12.687296 1275118 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:08:12.687363 1275118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:12.699896 1275118 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 23:08:12.699964 1275118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:12.714206 1275118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:12.729011 1275118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:12.740894 1275118 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:08:12.755870 1275118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:12.768728 1275118 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:12.793153 1275118 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:12.806243 1275118 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:08:12.817651 1275118 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:08:12.829997 1275118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:08:13.030565 1275118 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 23:08:13.231275 1275118 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:08:13.231393 1275118 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:08:13.237114 1275118 start.go:564] Will wait 60s for crictl version
	I1027 23:08:13.237234 1275118 ssh_runner.go:195] Run: which crictl
	I1027 23:08:13.241503 1275118 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:08:13.279148 1275118 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
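	(The sed edits at 23:08:12 switched CRI-O to the systemd cgroup manager, put conmon in the pod cgroup, and set the pause image. Since `crio config` prints the effective configuration, as minikube itself does below, those values can be read back with a sketch like:

	  crio config 2>/dev/null | grep -E 'cgroup_manager|conmon_cgroup|pause_image'
	)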
	I1027 23:08:13.279308 1275118 ssh_runner.go:195] Run: crio --version
	I1027 23:08:13.330817 1275118 ssh_runner.go:195] Run: crio --version
	I1027 23:08:13.381905 1275118 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 23:08:13.384940 1275118 cli_runner.go:164] Run: docker network inspect force-systemd-env-179399 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:08:13.406589 1275118 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 23:08:13.411131 1275118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:08:13.426183 1275118 kubeadm.go:884] updating cluster {Name:force-systemd-env-179399 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-179399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:08:13.426293 1275118 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:08:13.426353 1275118 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:08:13.492979 1275118 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:08:13.493004 1275118 crio.go:433] Images already preloaded, skipping extraction
	I1027 23:08:13.493062 1275118 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:08:13.544526 1275118 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:08:13.544546 1275118 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:08:13.544554 1275118 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 23:08:13.544657 1275118 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-179399 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-179399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
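	(The unit text above is installed as /lib/systemd/system/kubelet.service plus the drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, the 352- and 374-byte scp transfers below; the empty ExecStart= line clears any packaged command before minikube sets its own. The merged result can be reviewed with systemd:

	  sudo systemctl cat kubelet
	)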
	I1027 23:08:13.544735 1275118 ssh_runner.go:195] Run: crio config
	I1027 23:08:13.631193 1275118 cni.go:84] Creating CNI manager for ""
	I1027 23:08:13.631215 1275118 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:08:13.631228 1275118 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 23:08:13.631251 1275118 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-179399 NodeName:force-systemd-env-179399 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:08:13.631397 1275118 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-179399"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
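	(This generated config is written below as /var/tmp/minikube/kubeadm.yaml.new (2220 bytes) and promoted to kubeadm.yaml before init. Assuming the validate subcommand is available in the v1.34.1 kubeadm binary used here, the file can be sanity-checked offline:

	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml
	)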
	I1027 23:08:13.631479 1275118 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:08:13.641258 1275118 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:08:13.641344 1275118 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:08:13.650739 1275118 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1027 23:08:13.679606 1275118 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:08:13.708410 1275118 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1027 23:08:13.728178 1275118 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:08:13.732028 1275118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:08:13.747021 1275118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:08:13.935389 1275118 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:08:13.971633 1275118 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399 for IP: 192.168.85.2
	I1027 23:08:13.971655 1275118 certs.go:195] generating shared ca certs ...
	I1027 23:08:13.971671 1275118 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:13.971803 1275118 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:08:13.971858 1275118 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:08:13.971867 1275118 certs.go:257] generating profile certs ...
	I1027 23:08:13.971930 1275118 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/client.key
	I1027 23:08:13.971945 1275118 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/client.crt with IP's: []
	I1027 23:08:15.114154 1275118 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/client.crt ...
	I1027 23:08:15.114194 1275118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/client.crt: {Name:mk8261c70d04e916f451c560528b8afe9c02b78f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:15.114435 1275118 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/client.key ...
	I1027 23:08:15.114455 1275118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/client.key: {Name:mkc8828fc8a0d0784e13182a0b4de4717eadd1c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:15.114580 1275118 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.key.705069fd
	I1027 23:08:15.114603 1275118 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.crt.705069fd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1027 23:08:15.695249 1275118 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.crt.705069fd ...
	I1027 23:08:15.695282 1275118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.crt.705069fd: {Name:mke373ab2e1f4acaa3135981d58c28ea4d8e3b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:15.695494 1275118 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.key.705069fd ...
	I1027 23:08:15.695514 1275118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.key.705069fd: {Name:mke4440f9724ae7fa7ab9e7eab1a7dbd6e626d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:15.695616 1275118 certs.go:382] copying /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.crt.705069fd -> /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.crt
	I1027 23:08:15.695699 1275118 certs.go:386] copying /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.key.705069fd -> /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.key
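	(The apiserver certificate above carries the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]; 10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR, i.e. the ClusterIP the in-cluster kubernetes Service will receive. As a sketch, once this cluster is up the two should agree:

	  # Expect 10.96.0.1, matching the first SAN in apiserver.crt.
	  kubectl get svc kubernetes -n default -o jsonpath='{.spec.clusterIP}'
	)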
	I1027 23:08:15.695762 1275118 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.key
	I1027 23:08:15.695781 1275118 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.crt with IP's: []
	I1027 23:08:16.250487 1275118 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.crt ...
	I1027 23:08:16.250522 1275118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.crt: {Name:mk6c00abb5d8b28ed9ef9df62c5ce825dd869448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:16.250700 1275118 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.key ...
	I1027 23:08:16.250717 1275118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.key: {Name:mk06915f6bcdfd833f0e43133e245017081ca4bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:16.250800 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1027 23:08:16.250825 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1027 23:08:16.250839 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1027 23:08:16.250858 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1027 23:08:16.250872 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1027 23:08:16.250889 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1027 23:08:16.250901 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1027 23:08:16.250918 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1027 23:08:16.250973 1275118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:08:16.251012 1275118 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:08:16.251024 1275118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:08:16.251050 1275118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:08:16.251077 1275118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:08:16.251102 1275118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:08:16.251150 1275118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:08:16.251182 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:08:16.251199 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem -> /usr/share/ca-certificates/1134735.pem
	I1027 23:08:16.251211 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> /usr/share/ca-certificates/11347352.pem
	I1027 23:08:16.251738 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:08:16.281573 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:08:16.313813 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:08:16.348841 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:08:16.383772 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1027 23:08:16.404612 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 23:08:16.428180 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:08:16.451803 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 23:08:16.483059 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:08:16.511597 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:08:16.542609 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:08:16.578482 1275118 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:08:16.616640 1275118 ssh_runner.go:195] Run: openssl version
	I1027 23:08:16.624581 1275118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:08:16.640331 1275118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:08:16.644354 1275118 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:08:16.644417 1275118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:08:16.697462 1275118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 23:08:16.714843 1275118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:08:16.723094 1275118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:08:16.730233 1275118 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:08:16.730346 1275118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:08:16.780938 1275118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
	I1027 23:08:16.791561 1275118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:08:16.803585 1275118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:08:16.807691 1275118 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:08:16.807805 1275118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:08:16.858192 1275118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
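	(Each `openssl x509 -hash` call above computes the subject-hash filename (b5213941.0, 51391683.0, 3ec20f2e.0) under which OpenSSL looks up CA certificates in /etc/ssl/certs. The same symlink can be rebuilt by hand, as a sketch:

	  h=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	)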
	I1027 23:08:16.870664 1275118 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:08:16.878182 1275118 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 23:08:16.878281 1275118 kubeadm.go:401] StartCluster: {Name:force-systemd-env-179399 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-179399 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:08:16.878413 1275118 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:08:16.878510 1275118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:08:16.914374 1275118 cri.go:89] found id: ""
	I1027 23:08:16.914514 1275118 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:08:16.924758 1275118 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 23:08:16.947434 1275118 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 23:08:16.947576 1275118 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 23:08:16.963158 1275118 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 23:08:16.963243 1275118 kubeadm.go:158] found existing configuration files:
	
	I1027 23:08:16.963331 1275118 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 23:08:16.984044 1275118 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 23:08:16.984165 1275118 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 23:08:17.015818 1275118 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 23:08:17.035706 1275118 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 23:08:17.035822 1275118 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 23:08:17.055538 1275118 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 23:08:17.065968 1275118 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 23:08:17.066082 1275118 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 23:08:17.076330 1275118 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 23:08:17.092556 1275118 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 23:08:17.092698 1275118 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 23:08:17.100655 1275118 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
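	(The init command above skips the listed preflight checks because the docker driver shares the host kernel, hence the "ignoring SystemVerification" note at 23:08:16.947. To see what preflight alone would report, kubeadm exposes it as a phase:

	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight \
	    --config /var/tmp/minikube/kubeadm.yaml
	)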
	I1027 23:08:17.166793 1275118 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 23:08:17.167003 1275118 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 23:08:17.218682 1275118 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 23:08:17.218769 1275118 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 23:08:17.218812 1275118 kubeadm.go:319] OS: Linux
	I1027 23:08:17.218876 1275118 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 23:08:17.218938 1275118 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1027 23:08:17.218998 1275118 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 23:08:17.219059 1275118 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 23:08:17.219114 1275118 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 23:08:17.219174 1275118 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 23:08:17.219233 1275118 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 23:08:17.219295 1275118 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 23:08:17.219355 1275118 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1027 23:08:17.350538 1275118 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 23:08:17.350663 1275118 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 23:08:17.350775 1275118 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 23:08:17.362785 1275118 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 23:08:17.370375 1275118 out.go:252]   - Generating certificates and keys ...
	I1027 23:08:17.370527 1275118 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 23:08:17.370611 1275118 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 23:08:18.002109 1275118 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 23:08:18.094623 1275118 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 23:08:17.115769 1274679 node_ready.go:49] node "pause-180608" is "Ready"
	I1027 23:08:17.115796 1274679 node_ready.go:38] duration metric: took 9.890795108s for node "pause-180608" to be "Ready" ...
	I1027 23:08:17.115810 1274679 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:08:17.115867 1274679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:08:17.149679 1274679 api_server.go:72] duration metric: took 10.076474175s to wait for apiserver process to appear ...
	I1027 23:08:17.149718 1274679 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:08:17.149738 1274679 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 23:08:17.349127 1274679 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 23:08:17.349204 1274679 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 23:08:17.650703 1274679 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 23:08:17.765146 1274679 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 23:08:17.765184 1274679 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 23:08:18.150837 1274679 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 23:08:18.172410 1274679 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 23:08:18.172444 1274679 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 23:08:18.649898 1274679 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 23:08:18.674078 1274679 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 23:08:18.674162 1274679 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 23:08:19.150319 1274679 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 23:08:19.169602 1274679 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 23:08:19.171524 1274679 api_server.go:141] control plane version: v1.34.1
	I1027 23:08:19.171592 1274679 api_server.go:131] duration metric: took 2.021865281s to wait for apiserver health ...
	I1027 23:08:19.171615 1274679 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:08:19.176246 1274679 system_pods.go:59] 7 kube-system pods found
	I1027 23:08:19.176338 1274679 system_pods.go:61] "coredns-66bc5c9577-jpzmv" [b6d46c56-4560-41fa-8260-aa53ca712c2a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:08:19.176368 1274679 system_pods.go:61] "etcd-pause-180608" [1aa86d51-ae56-4f18-8bd8-31af60173abb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:08:19.176387 1274679 system_pods.go:61] "kindnet-pslcl" [1b2adb05-3d0c-4584-bc81-63f0cc6613ea] Running
	I1027 23:08:19.176422 1274679 system_pods.go:61] "kube-apiserver-pause-180608" [bd296105-222a-4a81-820b-3ea0f7d3b789] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:08:19.176449 1274679 system_pods.go:61] "kube-controller-manager-pause-180608" [ab99b59b-334a-46d7-a97f-d0d6f3391519] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:08:19.176468 1274679 system_pods.go:61] "kube-proxy-22xkc" [c797f2db-9e8c-4853-a30f-9e3104917115] Running
	I1027 23:08:19.176506 1274679 system_pods.go:61] "kube-scheduler-pause-180608" [862456a9-d065-4378-a5c8-fa4d9f086880] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:08:19.176532 1274679 system_pods.go:74] duration metric: took 4.897571ms to wait for pod list to return data ...
	I1027 23:08:19.176554 1274679 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:08:19.179287 1274679 default_sa.go:45] found service account: "default"
	I1027 23:08:19.179339 1274679 default_sa.go:55] duration metric: took 2.751239ms for default service account to be created ...
	I1027 23:08:19.179375 1274679 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:08:19.182192 1274679 system_pods.go:86] 7 kube-system pods found
	I1027 23:08:19.182261 1274679 system_pods.go:89] "coredns-66bc5c9577-jpzmv" [b6d46c56-4560-41fa-8260-aa53ca712c2a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:08:19.182286 1274679 system_pods.go:89] "etcd-pause-180608" [1aa86d51-ae56-4f18-8bd8-31af60173abb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:08:19.182327 1274679 system_pods.go:89] "kindnet-pslcl" [1b2adb05-3d0c-4584-bc81-63f0cc6613ea] Running
	I1027 23:08:19.182355 1274679 system_pods.go:89] "kube-apiserver-pause-180608" [bd296105-222a-4a81-820b-3ea0f7d3b789] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:08:19.182406 1274679 system_pods.go:89] "kube-controller-manager-pause-180608" [ab99b59b-334a-46d7-a97f-d0d6f3391519] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:08:19.182431 1274679 system_pods.go:89] "kube-proxy-22xkc" [c797f2db-9e8c-4853-a30f-9e3104917115] Running
	I1027 23:08:19.182457 1274679 system_pods.go:89] "kube-scheduler-pause-180608" [862456a9-d065-4378-a5c8-fa4d9f086880] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:08:19.182492 1274679 system_pods.go:126] duration metric: took 3.092537ms to wait for k8s-apps to be running ...
	I1027 23:08:19.182521 1274679 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:08:19.182604 1274679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:08:19.198592 1274679 system_svc.go:56] duration metric: took 16.063989ms WaitForService to wait for kubelet
	I1027 23:08:19.198670 1274679 kubeadm.go:587] duration metric: took 12.125470164s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:08:19.198726 1274679 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:08:19.201861 1274679 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:08:19.201939 1274679 node_conditions.go:123] node cpu capacity is 2
	I1027 23:08:19.201968 1274679 node_conditions.go:105] duration metric: took 3.22355ms to run NodePressure ...
	I1027 23:08:19.201993 1274679 start.go:242] waiting for startup goroutines ...
	I1027 23:08:19.202033 1274679 start.go:247] waiting for cluster config update ...
	I1027 23:08:19.202057 1274679 start.go:256] writing updated cluster config ...
	I1027 23:08:19.202471 1274679 ssh_runner.go:195] Run: rm -f paused
	I1027 23:08:19.206437 1274679 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:08:19.207063 1274679 kapi.go:59] client config for pause-180608: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.crt", KeyFile:"/home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.key", CAFile:"/home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21204e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 23:08:19.212893 1274679 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jpzmv" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:18.443408 1275118 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 23:08:18.953096 1275118 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 23:08:19.233890 1275118 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 23:08:19.234496 1275118 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-179399 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 23:08:19.794287 1275118 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 23:08:19.794479 1275118 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-179399 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 23:08:20.209613 1275118 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 23:08:21.189612 1275118 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 23:08:22.060058 1275118 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 23:08:22.060376 1275118 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 23:08:22.240824 1275118 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 23:08:23.130237 1275118 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 23:08:23.189344 1275118 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 23:08:23.445089 1275118 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 23:08:23.772693 1275118 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 23:08:23.773339 1275118 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 23:08:23.776011 1275118 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1027 23:08:21.224870 1274679 pod_ready.go:104] pod "coredns-66bc5c9577-jpzmv" is not "Ready", error: <nil>
	I1027 23:08:22.219106 1274679 pod_ready.go:94] pod "coredns-66bc5c9577-jpzmv" is "Ready"
	I1027 23:08:22.219146 1274679 pod_ready.go:86] duration metric: took 3.006181251s for pod "coredns-66bc5c9577-jpzmv" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:22.222755 1274679 pod_ready.go:83] waiting for pod "etcd-pause-180608" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:22.231318 1274679 pod_ready.go:94] pod "etcd-pause-180608" is "Ready"
	I1027 23:08:22.231346 1274679 pod_ready.go:86] duration metric: took 8.565279ms for pod "etcd-pause-180608" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:22.234252 1274679 pod_ready.go:83] waiting for pod "kube-apiserver-pause-180608" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:22.240011 1274679 pod_ready.go:94] pod "kube-apiserver-pause-180608" is "Ready"
	I1027 23:08:22.240042 1274679 pod_ready.go:86] duration metric: took 5.762052ms for pod "kube-apiserver-pause-180608" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:22.243481 1274679 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-180608" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 23:08:24.251216 1274679 pod_ready.go:104] pod "kube-controller-manager-pause-180608" is not "Ready", error: <nil>
	I1027 23:08:23.779306 1275118 out.go:252]   - Booting up control plane ...
	I1027 23:08:23.779414 1275118 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 23:08:23.779502 1275118 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 23:08:23.779577 1275118 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 23:08:23.797090 1275118 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 23:08:23.797211 1275118 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 23:08:23.804690 1275118 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 23:08:23.805081 1275118 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 23:08:23.805328 1275118 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 23:08:23.942864 1275118 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 23:08:23.942989 1275118 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 23:08:24.940605 1275118 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00162096s
	I1027 23:08:24.944353 1275118 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 23:08:24.944459 1275118 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1027 23:08:24.944561 1275118 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 23:08:24.944648 1275118 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1027 23:08:26.261097 1274679 pod_ready.go:104] pod "kube-controller-manager-pause-180608" is not "Ready", error: <nil>
	I1027 23:08:28.249125 1274679 pod_ready.go:94] pod "kube-controller-manager-pause-180608" is "Ready"
	I1027 23:08:28.249163 1274679 pod_ready.go:86] duration metric: took 6.005657026s for pod "kube-controller-manager-pause-180608" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:28.255249 1274679 pod_ready.go:83] waiting for pod "kube-proxy-22xkc" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:28.263283 1274679 pod_ready.go:94] pod "kube-proxy-22xkc" is "Ready"
	I1027 23:08:28.263320 1274679 pod_ready.go:86] duration metric: took 8.043957ms for pod "kube-proxy-22xkc" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:28.265680 1274679 pod_ready.go:83] waiting for pod "kube-scheduler-pause-180608" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:28.416604 1274679 pod_ready.go:94] pod "kube-scheduler-pause-180608" is "Ready"
	I1027 23:08:28.416633 1274679 pod_ready.go:86] duration metric: took 150.92893ms for pod "kube-scheduler-pause-180608" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:28.416646 1274679 pod_ready.go:40] duration metric: took 9.210132913s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:08:28.526416 1274679 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:08:28.529757 1274679 out.go:179] * Done! kubectl is now configured to use "pause-180608" cluster and "default" namespace by default
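
Note: the repeated [+]/[-] blocks earlier in this log are the apiserver's verbose health report; minikube polls /healthz until the rbac/bootstrap-roles post-start hook flips from failed to ok. As a sketch (endpoint, context name, and certificate paths taken from the kapi.go client config logged above), the same per-check breakdown can be fetched by hand:

	# Fetch the verbose health report; ?verbose prints one [+]/[-] line per check.
	$ curl --cacert /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt \
	       --cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.crt \
	       --key /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.key \
	       'https://192.168.76.2:8443/healthz?verbose'
	# Equivalent through the kubeconfig minikube just wrote:
	$ kubectl --context pause-180608 get --raw='/healthz?verbose'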
	
	
	==> CRI-O <==
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.893979411Z" level=info msg="Creating container: kube-system/kube-scheduler-pause-180608/kube-scheduler" id=782d36b2-a333-472c-8042-c45e7a687af9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.894091125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.904680921Z" level=info msg="Created container eac1eaa2581f322bde6c2d4ae935a6d2cb15370a30afec7a7667ae3a06ab0a7e: kube-system/kube-controller-manager-pause-180608/kube-controller-manager" id=2dc01812-75bf-430c-9daf-ee7f83e21ffd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.905583581Z" level=info msg="Starting container: eac1eaa2581f322bde6c2d4ae935a6d2cb15370a30afec7a7667ae3a06ab0a7e" id=5398ddfc-d6e4-48c8-b05c-7a08b44c7392 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.90685899Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.907712788Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.91483912Z" level=info msg="Started container" PID=2409 containerID=eac1eaa2581f322bde6c2d4ae935a6d2cb15370a30afec7a7667ae3a06ab0a7e description=kube-system/kube-controller-manager-pause-180608/kube-controller-manager id=5398ddfc-d6e4-48c8-b05c-7a08b44c7392 name=/runtime.v1.RuntimeService/StartContainer sandboxID=363081f1b345418ed1a5e44ad25c594298bddc6f0b12e48e33d34fb2559d39ac
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.944336643Z" level=info msg="Created container 2e1bc6d366adf84302b7bcd049e7f88bcb3a9cfa520eb44ba543635e1f6ab359: kube-system/kube-scheduler-pause-180608/kube-scheduler" id=782d36b2-a333-472c-8042-c45e7a687af9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.945263812Z" level=info msg="Starting container: 2e1bc6d366adf84302b7bcd049e7f88bcb3a9cfa520eb44ba543635e1f6ab359" id=94aaa80d-1687-40dd-a246-bef818ddf7d3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.947795602Z" level=info msg="Started container" PID=2433 containerID=2e1bc6d366adf84302b7bcd049e7f88bcb3a9cfa520eb44ba543635e1f6ab359 description=kube-system/kube-scheduler-pause-180608/kube-scheduler id=94aaa80d-1687-40dd-a246-bef818ddf7d3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7eb7741f886fbcf5e97b16855b5489675624390c0a97841983567144ac941a5f
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.980203497Z" level=info msg="Created container 53247afb6c26daf50454350a834356b289462e93b7f913f3e55b3555d45b700e: kube-system/kube-apiserver-pause-180608/kube-apiserver" id=4cac04f8-4be4-42e2-b7ae-e4787edaec69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.981182548Z" level=info msg="Starting container: 53247afb6c26daf50454350a834356b289462e93b7f913f3e55b3555d45b700e" id=1a749594-ec09-40e1-8d5c-b3481cc816cc name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.984136925Z" level=info msg="Started container" PID=2422 containerID=53247afb6c26daf50454350a834356b289462e93b7f913f3e55b3555d45b700e description=kube-system/kube-apiserver-pause-180608/kube-apiserver id=1a749594-ec09-40e1-8d5c-b3481cc816cc name=/runtime.v1.RuntimeService/StartContainer sandboxID=944719700d14b455216c5b20b5ba8ad455eafdde2ba9690b8bbaa754c1394839
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.123919064Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.127982217Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.128025951Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.128049656Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.136954116Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.137118385Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.137195194Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.140763628Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.140939639Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.141015283Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.144315881Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.14447672Z" level=info msg="Updated default CNI network name to kindnet"
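
Note: the CREATE/WRITE/RENAME events above are CRI-O's CNI watcher picking up kindnet rewriting its config via a temp file and atomic rename. A sketch for inspecting the adopted config (file path from the log, binary and profile as used elsewhere in this report):

	# Show the CNI config that CRI-O adopted as the default network.
	$ out/minikube-linux-arm64 -p pause-180608 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist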
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	2e1bc6d366adf       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   22 seconds ago       Running             kube-scheduler            1                   7eb7741f886fb       kube-scheduler-pause-180608            kube-system
	53247afb6c26d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   22 seconds ago       Running             kube-apiserver            1                   944719700d14b       kube-apiserver-pause-180608            kube-system
	eac1eaa2581f3       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   22 seconds ago       Running             kube-controller-manager   1                   363081f1b3454       kube-controller-manager-pause-180608   kube-system
	021da40950a29       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   22 seconds ago       Running             etcd                      1                   44e1b47786dd9       etcd-pause-180608                      kube-system
	7c741dedb9b95       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   352c7e1a5b63c       coredns-66bc5c9577-jpzmv               kube-system
	90838204b928c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   22 seconds ago       Running             kindnet-cni               1                   91a7d8322f597       kindnet-pslcl                          kube-system
	893e096fab004       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   22 seconds ago       Running             kube-proxy                1                   c55e63abd8b37       kube-proxy-22xkc                       kube-system
	64d490196d16b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   39 seconds ago       Exited              coredns                   0                   352c7e1a5b63c       coredns-66bc5c9577-jpzmv               kube-system
	1852461627d88       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   c55e63abd8b37       kube-proxy-22xkc                       kube-system
	2b428d4b7e6fb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   91a7d8322f597       kindnet-pslcl                          kube-system
	8e2099955fee8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   363081f1b3454       kube-controller-manager-pause-180608   kube-system
	11948704eefc0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   7eb7741f886fb       kube-scheduler-pause-180608            kube-system
	190b5dd451533       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   44e1b47786dd9       etcd-pause-180608                      kube-system
	ccf3881ff1ed4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   944719700d14b       kube-apiserver-pause-180608            kube-system
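
Note: attempt 0 containers are left Exited from before the restart, and the attempt 1 containers are their Running replacements. The same listing can be reproduced on the node (a sketch; /var/run/crio/crio.sock is CRI-O's default socket, also named in the kubelet event below):

	# List every container CRI-O knows about, including the Exited attempt-0 ones.
	$ out/minikube-linux-arm64 -p pause-180608 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a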
	
	
	==> coredns [64d490196d16ba5e9e067647e6c057744f2984df8bb471f59101d483eb228168] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50128 - 40765 "HINFO IN 770041778125185702.7584871950647285198. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.027977131s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7c741dedb9b95b51a18a73a8bae03bfd6e03223aee5c148db0fb790cd53ee265] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45307 - 43781 "HINFO IN 4769005089381244768.5847984695354447938. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.042252086s
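
Note: the two coredns sections are the Running container and its Exited predecessor; the "waiting for Kubernetes API" lines cover the window in which /healthz was still returning 500. A sketch for pulling both logs per container (pod name as listed above):

	# Current attempt's log, then the previous (Exited) attempt's log.
	$ kubectl --context pause-180608 -n kube-system logs coredns-66bc5c9577-jpzmv
	$ kubectl --context pause-180608 -n kube-system logs coredns-66bc5c9577-jpzmv --previous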
	
	
	==> describe nodes <==
	Name:               pause-180608
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-180608
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=pause-180608
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T23_07_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 23:07:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-180608
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 23:08:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 23:07:52 +0000   Mon, 27 Oct 2025 23:06:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 23:07:52 +0000   Mon, 27 Oct 2025 23:06:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 23:07:52 +0000   Mon, 27 Oct 2025 23:06:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 23:07:52 +0000   Mon, 27 Oct 2025 23:07:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-180608
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                b34479aa-efa2-484b-aa2e-cbed6f6b0ba2
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-jpzmv                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     81s
	  kube-system                 etcd-pause-180608                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         86s
	  kube-system                 kindnet-pslcl                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      81s
	  kube-system                 kube-apiserver-pause-180608             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-pause-180608    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-22xkc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-pause-180608             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 80s                kube-proxy       
	  Normal   Starting                 13s                kube-proxy       
	  Warning  CgroupV1                 97s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  97s (x8 over 97s)  kubelet          Node pause-180608 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    97s (x8 over 97s)  kubelet          Node pause-180608 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     97s (x8 over 97s)  kubelet          Node pause-180608 status is now: NodeHasSufficientPID
	  Normal   Starting                 87s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 87s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  86s                kubelet          Node pause-180608 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    86s                kubelet          Node pause-180608 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     86s                kubelet          Node pause-180608 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           82s                node-controller  Node pause-180608 event: Registered Node pause-180608 in Controller
	  Normal   NodeReady                40s                kubelet          Node pause-180608 status is now: NodeReady
	  Warning  ContainerGCFailed        26s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           12s                node-controller  Node pause-180608 event: Registered Node pause-180608 in Controller
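
Note: the ContainerGCFailed warning above is the kubelet dialing /var/run/crio/crio.sock while CRI-O was down for the restart, and the two RegisteredNode events bracket the old and new controller-manager. A sketch for regenerating this section and listing the node's events in time order:

	$ kubectl --context pause-180608 describe node pause-180608
	$ kubectl --context pause-180608 get events --field-selector involvedObject.name=pause-180608 --sort-by=.lastTimestamp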
	
	
	==> dmesg <==
	[  +3.174012] overlayfs: idmapped layers are currently not supported
	[ +37.061621] overlayfs: idmapped layers are currently not supported
	[Oct27 22:44] overlayfs: idmapped layers are currently not supported
	[Oct27 22:45] overlayfs: idmapped layers are currently not supported
	[  +4.255944] overlayfs: idmapped layers are currently not supported
	[Oct27 22:46] overlayfs: idmapped layers are currently not supported
	[Oct27 22:47] overlayfs: idmapped layers are currently not supported
	[Oct27 22:48] overlayfs: idmapped layers are currently not supported
	[Oct27 22:53] overlayfs: idmapped layers are currently not supported
	[Oct27 22:54] overlayfs: idmapped layers are currently not supported
	[Oct27 22:55] overlayfs: idmapped layers are currently not supported
	[Oct27 22:56] overlayfs: idmapped layers are currently not supported
	[Oct27 22:57] overlayfs: idmapped layers are currently not supported
	[Oct27 22:59] overlayfs: idmapped layers are currently not supported
	[ +25.315146] overlayfs: idmapped layers are currently not supported
	[  +1.719322] overlayfs: idmapped layers are currently not supported
	[Oct27 23:00] overlayfs: idmapped layers are currently not supported
	[Oct27 23:01] overlayfs: idmapped layers are currently not supported
	[ +42.515610] overlayfs: idmapped layers are currently not supported
	[Oct27 23:02] overlayfs: idmapped layers are currently not supported
	[Oct27 23:03] overlayfs: idmapped layers are currently not supported
	[Oct27 23:04] overlayfs: idmapped layers are currently not supported
	[Oct27 23:06] overlayfs: idmapped layers are currently not supported
	[  +3.129054] overlayfs: idmapped layers are currently not supported
	[Oct27 23:08] overlayfs: idmapped layers are currently not supported
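
Note: the [Oct27 23:08] / [ +3.174012] timestamp style matches dmesg's relative-time output; a sketch to reproduce this section on the node (assuming util-linux dmesg, where -e is --reltime):

	# Kernel ring buffer with relative timestamps, as captured above.
	$ out/minikube-linux-arm64 -p pause-180608 ssh -- sudo dmesg -e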
	
	
	==> etcd [021da40950a294110e4541f9cb8799f59a838a0c2abc0af7436a6bebd4c0e8cd] <==
	{"level":"warn","ts":"2025-10-27T23:08:13.659695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:13.686210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:13.715405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:13.767999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:13.796285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:13.863400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:13.880094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:13.892780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:13.909581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:13.958190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.010589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.131709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.174421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.243748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.294949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.338607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.373971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.411289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.462564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.502036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.586538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.602654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.663716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.701356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.913413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34090","server-name":"","error":"EOF"}
	
	
	==> etcd [190b5dd4515332ce06bf30b75f07111cc7134d2b22bc385fb9a47744a7ced680] <==
	{"level":"warn","ts":"2025-10-27T23:07:01.154605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:07:01.174928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:07:01.201145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:07:01.262749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:07:01.272869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:07:01.281090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:07:01.373302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49350","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T23:07:57.828022Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-27T23:07:57.828072Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-180608","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-27T23:07:57.828155Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T23:07:58.111433Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T23:07:58.112902Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T23:07:58.112968Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-27T23:07:58.113052Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-27T23:07:58.113063Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-27T23:07:58.113359Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T23:07:58.113373Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T23:07:58.113380Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-27T23:07:58.113291Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T23:07:58.113410Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T23:07:58.113417Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T23:07:58.116356Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-27T23:07:58.116423Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T23:07:58.116450Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-27T23:07:58.116457Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-180608","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 23:08:32 up  5:51,  0 user,  load average: 5.62, 3.00, 2.27
	Linux pause-180608 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
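
Note: this section is roughly uptime, uname -a, and the os-release pretty name; a sketch to regenerate it on the node:

	$ out/minikube-linux-arm64 -p pause-180608 ssh -- 'uptime; uname -a; grep PRETTY_NAME /etc/os-release'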
	
	
	==> kindnet [2b428d4b7e6fbf4f947b835d957fda754922104d7bf53f17c3783574eafa08d7] <==
	I1027 23:07:11.856738       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:07:11.857148       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 23:07:11.857307       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:07:11.857351       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:07:11.857388       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:07:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:07:12.035924       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:07:12.036029       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:07:12.036065       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:07:12.036284       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 23:07:42.036471       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 23:07:42.036593       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 23:07:42.037895       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 23:07:42.117140       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1027 23:07:43.636306       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 23:07:43.636403       1 metrics.go:72] Registering metrics
	I1027 23:07:43.636515       1 controller.go:711] "Syncing nftables rules"
	I1027 23:07:52.035368       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 23:07:52.035497       1 main.go:301] handling current node
	
	
	==> kindnet [90838204b928c48a4dbbbe5ce5299e995c32585a66accba00603e5262d6cbb97] <==
	I1027 23:08:09.833612       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:08:09.836861       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 23:08:09.836997       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:08:09.837009       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:08:09.837024       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:08:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:08:10.122936       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:08:10.123021       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:08:10.123058       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:08:10.123492       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 23:08:10.123057       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 23:08:10.123132       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 23:08:10.123622       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 23:08:10.123687       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1027 23:08:17.723481       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 23:08:17.723617       1 metrics.go:72] Registering metrics
	I1027 23:08:17.723719       1 controller.go:711] "Syncing nftables rules"
	I1027 23:08:20.123474       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 23:08:20.123592       1 main.go:301] handling current node
	I1027 23:08:30.122532       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 23:08:30.122606       1 main.go:301] handling current node
	
	
	==> kube-apiserver [53247afb6c26daf50454350a834356b289462e93b7f913f3e55b3555d45b700e] <==
	I1027 23:08:17.493318       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 23:08:17.510649       1 aggregator.go:171] initial CRD sync complete...
	I1027 23:08:17.510767       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 23:08:17.510818       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 23:08:17.511598       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 23:08:17.515128       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 23:08:17.515830       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 23:08:17.515945       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 23:08:17.516439       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 23:08:17.538630       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 23:08:17.539244       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 23:08:17.539344       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 23:08:17.590855       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:08:17.592726       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 23:08:17.618021       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 23:08:17.622820       1 cache.go:39] Caches are synced for autoregister controller
	I1027 23:08:17.699607       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 23:08:17.707830       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 23:08:17.766077       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1027 23:08:17.807805       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 23:08:19.462657       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 23:08:20.928108       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 23:08:21.075573       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 23:08:21.125210       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 23:08:21.235053       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [ccf3881ff1ed45bc8d78cb82b817e75eea09bf871e82ef8b5245f5a2cf9233f2] <==
	W1027 23:07:57.846330       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.846437       1 logging.go:55] [core] [Channel #26 SubChannel #28]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.846486       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.846556       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.846614       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.849173       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.849236       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.849276       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.849318       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.849359       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.849400       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.853868       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.854111       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.854197       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.854272       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.854599       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.855245       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.856823       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.856872       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.856907       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.856945       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.856985       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.857022       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.857220       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.857423       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [8e2099955fee832bae84d5ff137f8359811066bc9c95e88db65fd0ae081d7627] <==
	I1027 23:07:10.138587       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 23:07:10.138598       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 23:07:10.138611       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 23:07:10.138619       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 23:07:10.138545       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 23:07:10.138536       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 23:07:10.138580       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 23:07:10.144473       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 23:07:10.150537       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 23:07:10.151168       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 23:07:10.152093       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 23:07:10.157354       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 23:07:10.162492       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 23:07:10.162598       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 23:07:10.178599       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 23:07:10.194810       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:07:10.214820       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-180608" podCIDRs=["10.244.0.0/24"]
	I1027 23:07:10.218042       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 23:07:10.237696       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 23:07:10.238136       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:07:10.287654       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:07:10.287742       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 23:07:10.287774       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 23:07:10.308935       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:07:55.148810       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [eac1eaa2581f322bde6c2d4ae935a6d2cb15370a30afec7a7667ae3a06ab0a7e] <==
	I1027 23:08:20.870437       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 23:08:20.870665       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 23:08:20.870738       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 23:08:20.870690       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 23:08:20.870823       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 23:08:20.870677       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 23:08:20.870700       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 23:08:20.873045       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 23:08:20.873172       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:08:20.883666       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:08:20.883766       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 23:08:20.883797       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 23:08:20.887581       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:08:20.889902       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 23:08:20.899535       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 23:08:20.903901       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 23:08:20.912242       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 23:08:20.916731       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 23:08:20.917744       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 23:08:20.917935       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 23:08:20.918063       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 23:08:20.918108       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 23:08:20.925001       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 23:08:20.925096       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 23:08:20.932523       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	
	
	==> kube-proxy [1852461627d88419e9ec506bd983019b2d829ddf9c13e1acb0e9a1afeaa96a41] <==
	I1027 23:07:12.140715       1 server_linux.go:53] "Using iptables proxy"
	I1027 23:07:12.228104       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 23:07:12.328882       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 23:07:12.328997       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 23:07:12.329132       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 23:07:12.364342       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:07:12.364473       1 server_linux.go:132] "Using iptables Proxier"
	I1027 23:07:12.368589       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 23:07:12.368988       1 server.go:527] "Version info" version="v1.34.1"
	I1027 23:07:12.369178       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:07:12.370543       1 config.go:200] "Starting service config controller"
	I1027 23:07:12.370608       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 23:07:12.370653       1 config.go:106] "Starting endpoint slice config controller"
	I1027 23:07:12.370681       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 23:07:12.370736       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 23:07:12.370761       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 23:07:12.371402       1 config.go:309] "Starting node config controller"
	I1027 23:07:12.373771       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 23:07:12.373841       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 23:07:12.470914       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 23:07:12.471010       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 23:07:12.473665       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [893e096fab0047978d7befba788f303c50255093c6b08e3b673897a4a72cf757] <==
	I1027 23:08:09.783438       1 server_linux.go:53] "Using iptables proxy"
	I1027 23:08:10.955419       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 23:08:17.752110       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 23:08:17.752226       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 23:08:17.752337       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 23:08:18.947869       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:08:18.947985       1 server_linux.go:132] "Using iptables Proxier"
	I1027 23:08:19.020894       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 23:08:19.021174       1 server.go:527] "Version info" version="v1.34.1"
	I1027 23:08:19.021198       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:08:19.029846       1 config.go:200] "Starting service config controller"
	I1027 23:08:19.029882       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 23:08:19.029899       1 config.go:106] "Starting endpoint slice config controller"
	I1027 23:08:19.029909       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 23:08:19.029923       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 23:08:19.029929       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 23:08:19.030562       1 config.go:309] "Starting node config controller"
	I1027 23:08:19.030580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 23:08:19.030586       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 23:08:19.131218       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 23:08:19.145156       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 23:08:19.154525       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [11948704eefc0fd263f8fad40340db77a8d0431f866be69fc274a1e120cedcb1] <==
	E1027 23:07:02.878156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 23:07:02.878193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 23:07:02.878240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 23:07:02.878281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 23:07:02.878330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 23:07:02.879495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 23:07:02.879555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 23:07:02.879613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 23:07:02.895843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1027 23:07:03.719987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 23:07:03.768607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 23:07:03.822697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1027 23:07:03.836465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 23:07:03.836596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 23:07:03.937395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 23:07:04.035508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 23:07:04.035615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 23:07:04.044011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1027 23:07:07.026793       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:07:57.829579       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1027 23:07:57.829689       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1027 23:07:57.829701       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1027 23:07:57.829719       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:07:57.829900       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1027 23:07:57.829914       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [2e1bc6d366adf84302b7bcd049e7f88bcb3a9cfa520eb44ba543635e1f6ab359] <==
	I1027 23:08:13.597355       1 serving.go:386] Generated self-signed cert in-memory
	I1027 23:08:19.098838       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 23:08:19.098934       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:08:19.111923       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 23:08:19.115772       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:08:19.126818       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:08:19.115787       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:08:19.126952       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:08:19.115800       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 23:08:19.115731       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 23:08:19.130581       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 23:08:19.227418       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:08:19.227464       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:08:19.230658       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.716034    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-180608\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="0f78cf5ad0bd28587872deda44de4e77" pod="kube-system/kube-apiserver-pause-180608"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.716315    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-180608\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="0861646f3e1faf01baf275c91d815b55" pod="kube-system/kube-controller-manager-pause-180608"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.716549    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-pslcl\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1b2adb05-3d0c-4584-bc81-63f0cc6613ea" pod="kube-system/kindnet-pslcl"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.716704    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-22xkc\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c797f2db-9e8c-4853-a30f-9e3104917115" pod="kube-system/kube-proxy-22xkc"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.716873    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-jpzmv\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b6d46c56-4560-41fa-8260-aa53ca712c2a" pod="kube-system/coredns-66bc5c9577-jpzmv"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: I1027 23:08:09.847777    1315 scope.go:117] "RemoveContainer" containerID="11948704eefc0fd263f8fad40340db77a8d0431f866be69fc274a1e120cedcb1"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.848353    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-180608\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="10d9c7ae05ec6d0c6bf62a82dca6c585" pod="kube-system/etcd-pause-180608"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.848552    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-180608\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="0f78cf5ad0bd28587872deda44de4e77" pod="kube-system/kube-apiserver-pause-180608"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.848727    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-180608\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="0861646f3e1faf01baf275c91d815b55" pod="kube-system/kube-controller-manager-pause-180608"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.852701    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-pslcl\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1b2adb05-3d0c-4584-bc81-63f0cc6613ea" pod="kube-system/kindnet-pslcl"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.853001    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-22xkc\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c797f2db-9e8c-4853-a30f-9e3104917115" pod="kube-system/kube-proxy-22xkc"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.853162    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-jpzmv\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b6d46c56-4560-41fa-8260-aa53ca712c2a" pod="kube-system/coredns-66bc5c9577-jpzmv"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.853299    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-180608\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="12820d4d7830dca4b90efffe49493306" pod="kube-system/kube-scheduler-pause-180608"
	Oct 27 23:08:10 pause-180608 kubelet[1315]: E1027 23:08:10.058913    1315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-180608?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="3.2s"
	Oct 27 23:08:16 pause-180608 kubelet[1315]: E1027 23:08:16.660198    1315 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-180608\" is forbidden: User \"system:node:pause-180608\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-180608' and this object" podUID="0f78cf5ad0bd28587872deda44de4e77" pod="kube-system/kube-apiserver-pause-180608"
	Oct 27 23:08:16 pause-180608 kubelet[1315]: E1027 23:08:16.661364    1315 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-180608\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-180608' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 27 23:08:16 pause-180608 kubelet[1315]: E1027 23:08:16.934877    1315 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-180608\" is forbidden: User \"system:node:pause-180608\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-180608' and this object" podUID="0861646f3e1faf01baf275c91d815b55" pod="kube-system/kube-controller-manager-pause-180608"
	Oct 27 23:08:17 pause-180608 kubelet[1315]: E1027 23:08:17.116632    1315 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-pslcl\" is forbidden: User \"system:node:pause-180608\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-180608' and this object" podUID="1b2adb05-3d0c-4584-bc81-63f0cc6613ea" pod="kube-system/kindnet-pslcl"
	Oct 27 23:08:17 pause-180608 kubelet[1315]: E1027 23:08:17.355901    1315 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-22xkc\" is forbidden: User \"system:node:pause-180608\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-180608' and this object" podUID="c797f2db-9e8c-4853-a30f-9e3104917115" pod="kube-system/kube-proxy-22xkc"
	Oct 27 23:08:17 pause-180608 kubelet[1315]: E1027 23:08:17.440386    1315 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-jpzmv\" is forbidden: User \"system:node:pause-180608\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-180608' and this object" podUID="b6d46c56-4560-41fa-8260-aa53ca712c2a" pod="kube-system/coredns-66bc5c9577-jpzmv"
	Oct 27 23:08:17 pause-180608 kubelet[1315]: E1027 23:08:17.503895    1315 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-180608\" is forbidden: User \"system:node:pause-180608\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-180608' and this object" podUID="12820d4d7830dca4b90efffe49493306" pod="kube-system/kube-scheduler-pause-180608"
	Oct 27 23:08:26 pause-180608 kubelet[1315]: W1027 23:08:26.585198    1315 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 27 23:08:29 pause-180608 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 23:08:29 pause-180608 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 23:08:29 pause-180608 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
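The status, docker inspect, and minikube logs calls that follow are the harness's standard post-mortem sequence. As a reference for reproducing it by hand, a minimal Go sketch of the same three commands, assuming out/minikube-linux-arm64 and docker are on PATH (illustrative only, not part of helpers_test.go):

	// Post-mortem sketch: replays the three commands the harness runs below.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and prints its combined output, mirroring the (dbg) Run lines.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v (err=%v)\n%s\n", name, args, err, out)
	}

	func main() {
		profile := "pause-180608" // profile name taken from this report
		run("out/minikube-linux-arm64", "status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
		run("docker", "inspect", profile)
		run("out/minikube-linux-arm64", "-p", profile, "logs", "-n", "25")
	}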
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-180608 -n pause-180608
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-180608 -n pause-180608: exit status 2 (388.940547ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-180608 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-180608
helpers_test.go:243: (dbg) docker inspect pause-180608:

-- stdout --
	[
	    {
	        "Id": "5efefd6988c38e832c9c43f319ad43f5b6069cc47cff45c0895bcd60f18e9fee",
	        "Created": "2025-10-27T23:06:30.896490938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1265475,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T23:06:30.993227097Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/5efefd6988c38e832c9c43f319ad43f5b6069cc47cff45c0895bcd60f18e9fee/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5efefd6988c38e832c9c43f319ad43f5b6069cc47cff45c0895bcd60f18e9fee/hostname",
	        "HostsPath": "/var/lib/docker/containers/5efefd6988c38e832c9c43f319ad43f5b6069cc47cff45c0895bcd60f18e9fee/hosts",
	        "LogPath": "/var/lib/docker/containers/5efefd6988c38e832c9c43f319ad43f5b6069cc47cff45c0895bcd60f18e9fee/5efefd6988c38e832c9c43f319ad43f5b6069cc47cff45c0895bcd60f18e9fee-json.log",
	        "Name": "/pause-180608",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-180608:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-180608",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5efefd6988c38e832c9c43f319ad43f5b6069cc47cff45c0895bcd60f18e9fee",
	                "LowerDir": "/var/lib/docker/overlay2/98b68858b82ed5749f9ce02f72af5d1d73d864ca5c7c401657a0bfb3497ba884-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98b68858b82ed5749f9ce02f72af5d1d73d864ca5c7c401657a0bfb3497ba884/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98b68858b82ed5749f9ce02f72af5d1d73d864ca5c7c401657a0bfb3497ba884/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98b68858b82ed5749f9ce02f72af5d1d73d864ca5c7c401657a0bfb3497ba884/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-180608",
	                "Source": "/var/lib/docker/volumes/pause-180608/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-180608",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-180608",
	                "name.minikube.sigs.k8s.io": "pause-180608",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b4c1962cdb455d35aa42f4d6268d85c18ce64bfaeda7756e54df47ea8e96bbe6",
	            "SandboxKey": "/var/run/docker/netns/b4c1962cdb45",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34449"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34450"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34453"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34451"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34452"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-180608": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:cb:23:25:68:2e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e57e66724fdc9a76ce4a5e6d71596915361b300178f7a0743fab0d1d0bf19ab8",
	                    "EndpointID": "6ff078f3c7d6e42420fb1106a39e32900146765df7825684a280c17b54e33407",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-180608",
	                        "5efefd6988c3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
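The inspect payload above is what the Docker Engine API returns for the pause-180608 container. The fields this post-mortem actually relies on (State.Status, the published host ports, the 192.168.76.2 address on the pause-180608 network) can also be read programmatically with the official Docker Go SDK; a minimal sketch, assuming github.com/docker/docker/client is available and a local daemon is reachable (illustrative only):

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		// Same call docker inspect makes: GET /containers/{id}/json.
		info, err := cli.ContainerInspect(context.Background(), "pause-180608")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("status:", info.State.Status) // "running" in the output above
		for name, ep := range info.NetworkSettings.Networks {
			fmt.Printf("network %s: ip=%s gw=%s\n", name, ep.IPAddress, ep.Gateway)
		}
	}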
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-180608 -n pause-180608
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-180608 -n pause-180608: exit status 2 (431.129918ms)

-- stdout --
	Running

-- /stdout --
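The --format value passed to status is a Go text/template executed against minikube's status struct, which is why {{.Host}} and {{.APIServer}} each render as a single word above. A minimal sketch of the mechanism; the Status type below is a hypothetical stand-in, not minikube's actual definition:

	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical stand-in for the struct minikube renders status templates against.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		// Equivalent of passing --format={{.Host}} on the command line.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Running"}) // prints "Running"
	}

The exit status 2 above comes from minikube's own component-state checks, independent of what the template renders, which is why the command can print Running and still fail.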
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-180608 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-180608 logs -n 25: (1.973773648s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-440075 sudo systemctl cat kubelet --no-pager                                                     │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo systemctl status docker --all --full --no-pager                                      │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo systemctl cat docker --no-pager                                                      │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo cat /etc/docker/daemon.json                                                          │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo docker system info                                                                   │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo cri-dockerd --version                                                                │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo systemctl cat containerd --no-pager                                                  │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo cat /etc/containerd/config.toml                                                      │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo containerd config dump                                                               │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo systemctl status crio --all --full --no-pager                                        │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo systemctl cat crio --no-pager                                                        │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ ssh     │ -p cilium-440075 sudo crio config                                                                          │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ delete  │ -p cilium-440075                                                                                           │ cilium-440075            │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │ 27 Oct 25 23:07 UTC │
	│ start   │ -p force-systemd-env-179399 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-179399 │ jenkins │ v1.37.0 │ 27 Oct 25 23:07 UTC │                     │
	│ pause   │ -p pause-180608 --alsologtostderr -v=5                                                                     │ pause-180608             │ jenkins │ v1.37.0 │ 27 Oct 25 23:08 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 23:07:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 23:07:58.382003 1275118 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:07:58.382449 1275118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:07:58.382488 1275118 out.go:374] Setting ErrFile to fd 2...
	I1027 23:07:58.382508 1275118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:07:58.382796 1275118 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:07:58.383276 1275118 out.go:368] Setting JSON to false
	I1027 23:07:58.384198 1275118 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":21028,"bootTime":1761585451,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 23:07:58.384302 1275118 start.go:143] virtualization:  
	I1027 23:07:58.387739 1275118 out.go:179] * [force-systemd-env-179399] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 23:07:58.391995 1275118 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:07:58.392097 1275118 notify.go:221] Checking for updates...
	I1027 23:07:58.397820 1275118 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:07:58.400730 1275118 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:07:58.403708 1275118 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 23:07:58.406654 1275118 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 23:07:58.409560 1275118 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1027 23:07:58.413108 1275118 config.go:182] Loaded profile config "pause-180608": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:07:58.413232 1275118 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:07:58.443885 1275118 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 23:07:58.444015 1275118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:07:58.520694 1275118 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 23:07:58.51150282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
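The info.go:266 line above is the decoded form of `docker system info --format "{{json .}}"`. A hedged sketch of that decode, keeping only a few of the fields the driver checks consult; the struct and field names are mine, but the JSON keys match the dump:

	// dockerinfo.go - run `docker system info --format "{{json .}}"` and
	// pull out the handful of fields that matter for driver validation.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type sysInfo struct {
		NCPU          int    `json:"NCPU"`
		MemTotal      int64  `json:"MemTotal"`
		CgroupDriver  string `json:"CgroupDriver"`
		OSType        string `json:"OSType"`
		Architecture  string `json:"Architecture"`
		ServerVersion string `json:"ServerVersion"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var si sysInfo
		if err := json.Unmarshal(out, &si); err != nil {
			panic(err)
		}
		fmt.Printf("%d CPUs, %d MiB, cgroup driver %s (%s/%s, server %s)\n",
			si.NCPU, si.MemTotal/1024/1024, si.CgroupDriver, si.OSType, si.Architecture, si.ServerVersion)
	}

Against the host captured above this would report 2 CPUs, 7834 MiB, and the cgroupfs driver.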
	I1027 23:07:58.520813 1275118 docker.go:318] overlay module found
	I1027 23:07:58.525877 1275118 out.go:179] * Using the docker driver based on user configuration
	I1027 23:07:58.528766 1275118 start.go:307] selected driver: docker
	I1027 23:07:58.528790 1275118 start.go:928] validating driver "docker" against <nil>
	I1027 23:07:58.528821 1275118 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:07:58.529708 1275118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:07:58.581660 1275118 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 23:07:58.572044469 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:07:58.581827 1275118 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 23:07:58.582089 1275118 start_flags.go:973] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 23:07:58.584935 1275118 out.go:179] * Using Docker driver with root privileges
	I1027 23:07:58.587714 1275118 cni.go:84] Creating CNI manager for ""
	I1027 23:07:58.587780 1275118 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:07:58.587794 1275118 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 23:07:58.587870 1275118 start.go:351] cluster config:
	{Name:force-systemd-env-179399 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-179399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:07:58.591004 1275118 out.go:179] * Starting "force-systemd-env-179399" primary control-plane node in "force-systemd-env-179399" cluster
	I1027 23:07:58.593763 1275118 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 23:07:58.596639 1275118 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 23:07:58.599437 1275118 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:07:58.599493 1275118 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 23:07:58.599506 1275118 cache.go:59] Caching tarball of preloaded images
	I1027 23:07:58.599520 1275118 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 23:07:58.599587 1275118 preload.go:233] Found /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 23:07:58.599597 1275118 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 23:07:58.599710 1275118 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/config.json ...
	I1027 23:07:58.599732 1275118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/config.json: {Name:mk83428b2aa61453697f46bac5df6e9ebab70e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:07:58.618476 1275118 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 23:07:58.618499 1275118 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 23:07:58.618519 1275118 cache.go:233] Successfully downloaded all kic artifacts
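The image.go/cache.go lines above skip the pull because the kicbase image already sits in the local daemon. One way to express that presence check (a sketch, not minikube's implementation) is to lean on `docker image inspect`'s exit status:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// imageInDaemon reports whether the local docker daemon already has ref.
	func imageInDaemon(ref string) bool {
		// `docker image inspect` exits non-zero when the image is missing
		return exec.Command("docker", "image", "inspect", ref).Run() == nil
	}

	func main() {
		ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"
		if imageInDaemon(ref) {
			fmt.Println("found in daemon, skipping pull")
		} else {
			fmt.Println("not found, would pull")
		}
	}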
	I1027 23:07:58.618543 1275118 start.go:360] acquireMachinesLock for force-systemd-env-179399: {Name:mkb2557f6b9cf7bc1dd1a195fbe38189a74b4ca6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:07:58.618657 1275118 start.go:364] duration metric: took 92.843µs to acquireMachinesLock for "force-systemd-env-179399"
	I1027 23:07:58.618693 1275118 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-179399 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-179399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:07:58.618764 1275118 start.go:125] createHost starting for "" (driver="docker")
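The acquireMachinesLock entries above show a named lock taken with Delay:500ms and Timeout:10m0s. A sketch of that acquire-with-retry pattern using an exclusive lock file; minikube's real lock is more involved, and the path here is hypothetical:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquire polls for an exclusive lock file until timeout elapses.
	func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay) // retry after the configured delay
		}
	}

	func main() {
		// hypothetical lock path, standing in for the named machines lock
		release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("lock held; provisioning could proceed")
	}

The 92.843µs acquisition above is the uncontended fast path; the pause-180608 goroutine interleaved below held its own lock for the full 7.6s fixHost.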
	I1027 23:07:55.930948 1274679 out.go:252] * Updating the running docker "pause-180608" container ...
	I1027 23:07:55.930981 1274679 machine.go:94] provisionDockerMachine start ...
	I1027 23:07:55.931060 1274679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:07:55.956401 1274679 main.go:143] libmachine: Using SSH client type: native
	I1027 23:07:55.956722 1274679 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34449 <nil> <nil>}
	I1027 23:07:55.956737 1274679 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:07:56.118093 1274679 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-180608
	
	I1027 23:07:56.118121 1274679 ubuntu.go:182] provisioning hostname "pause-180608"
	I1027 23:07:56.118194 1274679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:07:56.161213 1274679 main.go:143] libmachine: Using SSH client type: native
	I1027 23:07:56.161516 1274679 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34449 <nil> <nil>}
	I1027 23:07:56.161527 1274679 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-180608 && echo "pause-180608" | sudo tee /etc/hostname
	I1027 23:07:56.339361 1274679 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-180608
	
	I1027 23:07:56.339432 1274679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:07:56.366811 1274679 main.go:143] libmachine: Using SSH client type: native
	I1027 23:07:56.367100 1274679 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34449 <nil> <nil>}
	I1027 23:07:56.367115 1274679 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-180608' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-180608/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-180608' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:07:56.538490 1274679 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 23:07:56.538517 1274679 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:07:56.538549 1274679 ubuntu.go:190] setting up certificates
	I1027 23:07:56.538559 1274679 provision.go:84] configureAuth start
	I1027 23:07:56.538629 1274679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-180608
	I1027 23:07:56.562998 1274679 provision.go:143] copyHostCerts
	I1027 23:07:56.563065 1274679 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:07:56.563087 1274679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:07:56.563168 1274679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:07:56.563267 1274679 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:07:56.563279 1274679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:07:56.563306 1274679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:07:56.563403 1274679 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:07:56.563413 1274679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:07:56.563438 1274679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:07:56.563492 1274679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.pause-180608 san=[127.0.0.1 192.168.76.2 localhost minikube pause-180608]
	I1027 23:07:57.401131 1274679 provision.go:177] copyRemoteCerts
	I1027 23:07:57.401251 1274679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:07:57.401300 1274679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:07:57.420121 1274679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/pause-180608/id_rsa Username:docker}
	I1027 23:07:57.540396 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 23:07:57.562539 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 23:07:57.584892 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:07:57.605900 1274679 provision.go:87] duration metric: took 1.067319329s to configureAuth
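provision.go:117 generates a server certificate whose SANs cover every address the machine answers on (san=[127.0.0.1 192.168.76.2 localhost minikube pause-180608]). A compact crypto/x509 sketch of a certificate with exactly those SANs; it is self-signed here for brevity, whereas minikube signs with its CA key:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.pause-180608"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the provision.go line: IPs and hostnames the server answers as
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			DNSNames:    []string{"localhost", "minikube", "pause-180608"},
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}

The copyRemoteCerts step that follows then scp's the resulting server.pem/server-key.pem pair into /etc/docker on the machine.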
	I1027 23:07:57.605928 1274679 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:07:57.606164 1274679 config.go:182] Loaded profile config "pause-180608": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:07:57.606262 1274679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:07:57.636038 1274679 main.go:143] libmachine: Using SSH client type: native
	I1027 23:07:57.636344 1274679 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34449 <nil> <nil>}
	I1027 23:07:57.636359 1274679 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:07:58.622047 1275118 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 23:07:58.622293 1275118 start.go:159] libmachine.API.Create for "force-systemd-env-179399" (driver="docker")
	I1027 23:07:58.622340 1275118 client.go:173] LocalClient.Create starting
	I1027 23:07:58.622439 1275118 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem
	I1027 23:07:58.622482 1275118 main.go:143] libmachine: Decoding PEM data...
	I1027 23:07:58.622504 1275118 main.go:143] libmachine: Parsing certificate...
	I1027 23:07:58.622570 1275118 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem
	I1027 23:07:58.622594 1275118 main.go:143] libmachine: Decoding PEM data...
	I1027 23:07:58.622604 1275118 main.go:143] libmachine: Parsing certificate...
	I1027 23:07:58.623006 1275118 cli_runner.go:164] Run: docker network inspect force-systemd-env-179399 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 23:07:58.640447 1275118 cli_runner.go:211] docker network inspect force-systemd-env-179399 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 23:07:58.640550 1275118 network_create.go:284] running [docker network inspect force-systemd-env-179399] to gather additional debugging logs...
	I1027 23:07:58.640568 1275118 cli_runner.go:164] Run: docker network inspect force-systemd-env-179399
	W1027 23:07:58.657276 1275118 cli_runner.go:211] docker network inspect force-systemd-env-179399 returned with exit code 1
	I1027 23:07:58.657304 1275118 network_create.go:287] error running [docker network inspect force-systemd-env-179399]: docker network inspect force-systemd-env-179399: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-179399 not found
	I1027 23:07:58.657335 1275118 network_create.go:289] output of [docker network inspect force-systemd-env-179399]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-179399 not found
	
	** /stderr **
	I1027 23:07:58.657434 1275118 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:07:58.674338 1275118 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bec5bade6d32 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:b8:32:37:d1:1a} reservation:<nil>}
	I1027 23:07:58.674655 1275118 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0dc359f1a23c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:03:b5:bc:b2:ab} reservation:<nil>}
	I1027 23:07:58.674963 1275118 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6865072e7c41 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:f3:83:1f:14:0e} reservation:<nil>}
	I1027 23:07:58.675282 1275118 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e57e66724fdc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:bd:b1:42:6d:9f} reservation:<nil>}
	I1027 23:07:58.675687 1275118 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3f0c0}
	I1027 23:07:58.675716 1275118 network_create.go:124] attempt to create docker network force-systemd-env-179399 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1027 23:07:58.675773 1275118 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-179399 force-systemd-env-179399
	I1027 23:07:58.736029 1275118 network_create.go:108] docker network force-systemd-env-179399 192.168.85.0/24 created
	I1027 23:07:58.736061 1275118 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-179399" container
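The network.go lines above are a linear scan for a free /24: 192.168.49.0, 58.0, 67.0 and 76.0 are occupied by existing bridges, so 192.168.85.0/24 wins. A sketch of that scan; the step of 9 per attempt is inferred from this log, and taken() is stubbed with the four bridges the scan found:

	package main

	import "fmt"

	// firstFreeSubnet walks candidate /24s the way the log above does:
	// start at 192.168.49.0/24 and step the third octet by 9 per attempt.
	func firstFreeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 254; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{ // the four bridges the log found in use
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24, matching the log
	}

In the real code the taken set comes from inspecting existing docker networks and host interfaces, as the skipped-subnet entries (with their br-* interface details) show.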
	I1027 23:07:58.736146 1275118 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 23:07:58.752904 1275118 cli_runner.go:164] Run: docker volume create force-systemd-env-179399 --label name.minikube.sigs.k8s.io=force-systemd-env-179399 --label created_by.minikube.sigs.k8s.io=true
	I1027 23:07:58.772427 1275118 oci.go:103] Successfully created a docker volume force-systemd-env-179399
	I1027 23:07:58.772524 1275118 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-179399-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-179399 --entrypoint /usr/bin/test -v force-systemd-env-179399:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 23:07:59.289240 1275118 oci.go:107] Successfully prepared a docker volume force-systemd-env-179399
	I1027 23:07:59.289289 1275118 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:07:59.289309 1275118 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 23:07:59.289388 1275118 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-179399:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
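The extraction step runs tar inside the kicbase image so the preload lands directly in the machine's named volume. The same `docker run` invocation as above, driven from os/exec; the tarball path is shortened here:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		tarball := "/path/to/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4" // shortened
		volume := "force-systemd-env-179399"
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro", // mount the tarball read-only
			"-v", volume+":/extractDir",        // extract into the named volume
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}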
	I1027 23:08:03.256030 1274679 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:08:03.256052 1274679 machine.go:97] duration metric: took 7.325062268s to provisionDockerMachine
	I1027 23:08:03.256063 1274679 start.go:293] postStartSetup for "pause-180608" (driver="docker")
	I1027 23:08:03.256074 1274679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:08:03.256136 1274679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:08:03.256195 1274679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:08:03.274809 1274679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/pause-180608/id_rsa Username:docker}
	I1027 23:08:03.378961 1274679 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:08:03.382707 1274679 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:08:03.382786 1274679 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:08:03.382812 1274679 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:08:03.382886 1274679 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:08:03.382979 1274679 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:08:03.383083 1274679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:08:03.390637 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:08:03.409786 1274679 start.go:296] duration metric: took 153.707383ms for postStartSetup
	I1027 23:08:03.409887 1274679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:08:03.409947 1274679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:08:03.428706 1274679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/pause-180608/id_rsa Username:docker}
	I1027 23:08:03.532006 1274679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:08:03.537316 1274679 fix.go:57] duration metric: took 7.639402094s for fixHost
	I1027 23:08:03.537343 1274679 start.go:83] releasing machines lock for "pause-180608", held for 7.639455764s
	I1027 23:08:03.537411 1274679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-180608
	I1027 23:08:03.554616 1274679 ssh_runner.go:195] Run: cat /version.json
	I1027 23:08:03.554674 1274679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:08:03.554736 1274679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:08:03.554810 1274679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-180608
	I1027 23:08:03.575301 1274679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/pause-180608/id_rsa Username:docker}
	I1027 23:08:03.578446 1274679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/pause-180608/id_rsa Username:docker}
	I1027 23:08:03.768933 1274679 ssh_runner.go:195] Run: systemctl --version
	I1027 23:08:03.775656 1274679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:08:03.826355 1274679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:08:03.832274 1274679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:08:03.832396 1274679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:08:03.840748 1274679 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 23:08:03.840780 1274679 start.go:496] detecting cgroup driver to use...
	I1027 23:08:03.840833 1274679 detect.go:187] detected "cgroupfs" cgroup driver on host os
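detect.go:187 settles on "cgroupfs" for this host. A common heuristic for that decision, not necessarily minikube's exact logic: systemd as PID 1 plus a unified (cgroup v2) hierarchy suggests the "systemd" driver, anything else falls back to "cgroupfs":

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func cgroupDriver() string {
		comm, _ := os.ReadFile("/proc/1/comm") // name of the init process
		systemdInit := strings.TrimSpace(string(comm)) == "systemd"
		// a unified (cgroup v2) hierarchy exposes cgroup.controllers at its root
		_, err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
		if systemdInit && err == nil {
			return "systemd"
		}
		return "cgroupfs"
	}

	func main() { fmt.Println(cgroupDriver()) }

Whatever is detected has to match on both sides: the cri-o edits below and the kubeadm KubeletConfiguration further down both pin cgroupfs.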
	I1027 23:08:03.840897 1274679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:08:03.856977 1274679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:08:03.871439 1274679 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:08:03.871524 1274679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:08:03.889001 1274679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:08:03.902958 1274679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:08:04.048484 1274679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:08:04.237985 1274679 docker.go:234] disabling docker service ...
	I1027 23:08:04.238064 1274679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:08:04.256902 1274679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:08:04.281270 1274679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:08:04.480505 1274679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:08:04.724739 1274679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:08:04.758043 1274679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:08:04.794677 1274679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:08:04.794747 1274679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:04.811267 1274679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:08:04.811354 1274679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:04.833069 1274679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:04.859024 1274679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:04.873141 1274679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:08:04.886017 1274679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:04.896833 1274679 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:04.917898 1274679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:04.930227 1274679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:08:04.948342 1274679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:08:04.965233 1274679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:08:05.332543 1274679 ssh_runner.go:195] Run: sudo systemctl restart crio
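Every crio.go step above is an in-place line rewrite of /etc/crio/crio.conf.d/02-crio.conf followed by one daemon-reload and restart. The cgroup_manager edit, as a Go equivalent of its sed command (the other edits differ only in pattern and replacement):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		path := "/etc/crio/crio.conf.d/02-crio.conf"
		conf, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// mirror `sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'`
		re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		out := re.ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			panic(err)
		}
	}

After all the rewrites the file carries pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "cgroupfs", conmon_cgroup = "pod" and the net.ipv4.ip_unprivileged_port_start=0 sysctl, and the `systemctl restart crio` at the end makes them take effect.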
	I1027 23:08:05.713719 1274679 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:08:05.713786 1274679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:08:05.717769 1274679 start.go:564] Will wait 60s for crictl version
	I1027 23:08:05.717844 1274679 ssh_runner.go:195] Run: which crictl
	I1027 23:08:05.722972 1274679 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:08:05.762700 1274679 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 23:08:05.762787 1274679 ssh_runner.go:195] Run: crio --version
	I1027 23:08:05.808161 1274679 ssh_runner.go:195] Run: crio --version
	I1027 23:08:05.854047 1274679 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 23:08:05.857230 1274679 cli_runner.go:164] Run: docker network inspect pause-180608 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:08:05.881635 1274679 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 23:08:05.886449 1274679 kubeadm.go:884] updating cluster {Name:pause-180608 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-180608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:08:05.886592 1274679 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:08:05.886645 1274679 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:08:05.940329 1274679 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:08:05.940351 1274679 crio.go:433] Images already preloaded, skipping extraction
	I1027 23:08:05.940409 1274679 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:08:05.988404 1274679 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:08:05.988486 1274679 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:08:05.988509 1274679 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 23:08:05.988641 1274679 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-180608 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-180608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 23:08:05.988796 1274679 ssh_runner.go:195] Run: crio config
	I1027 23:08:06.057416 1274679 cni.go:84] Creating CNI manager for ""
	I1027 23:08:06.057493 1274679 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:08:06.057526 1274679 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 23:08:06.057581 1274679 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-180608 NodeName:pause-180608 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:08:06.057763 1274679 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-180608"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 23:08:06.057882 1274679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:08:06.069271 1274679 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:08:06.069345 1274679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:08:06.080447 1274679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1027 23:08:06.098633 1274679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:08:06.116956 1274679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1027 23:08:06.134552 1274679 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:08:06.139628 1274679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:08:06.306416 1274679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:08:06.319676 1274679 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608 for IP: 192.168.76.2
	I1027 23:08:06.319759 1274679 certs.go:195] generating shared ca certs ...
	I1027 23:08:06.319797 1274679 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:06.319972 1274679 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:08:06.320042 1274679 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:08:06.320066 1274679 certs.go:257] generating profile certs ...
	I1027 23:08:06.320176 1274679 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.key
	I1027 23:08:06.320289 1274679 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/apiserver.key.8063c8c5
	I1027 23:08:06.320372 1274679 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/proxy-client.key
	I1027 23:08:06.320502 1274679 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:08:06.320568 1274679 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:08:06.320603 1274679 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:08:06.320658 1274679 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:08:06.320722 1274679 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:08:06.320766 1274679 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:08:06.320849 1274679 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:08:06.321538 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:08:06.340898 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:08:06.358354 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:08:06.376336 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:08:06.394040 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 23:08:06.419502 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 23:08:06.437614 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:08:06.456046 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 23:08:06.475815 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:08:06.494485 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:08:06.513196 1274679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:08:06.531179 1274679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:08:06.546802 1274679 ssh_runner.go:195] Run: openssl version
	I1027 23:08:06.553249 1274679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:08:06.562027 1274679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:08:06.565925 1274679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:08:06.566014 1274679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:08:06.613738 1274679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
	I1027 23:08:06.622605 1274679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:08:06.632100 1274679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:08:06.636070 1274679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:08:06.636180 1274679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:08:06.677370 1274679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:08:06.685420 1274679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:08:06.693628 1274679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:08:06.697466 1274679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:08:06.697563 1274679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:08:06.740560 1274679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
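
Note: the `ln -fs ... /etc/ssl/certs/b5213941.0` steps above install each CA under the subject-hash filename that OpenSSL uses to look up trust anchors in /etc/ssl/certs (`<hash>.0`). A minimal Go sketch of the same idea, shelling out to `openssl x509 -hash` exactly as the log does (paths illustrative, not minikube's code):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pemPath := "/etc/ssl/certs/minikubeCA.pem"
        // Compute the OpenSSL subject hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            log.Fatal(err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        // Mirror the `test -L || ln -fs` guard: only create the link if absent.
        if _, err := os.Lstat(link); err == nil {
            return
        }
        if err := os.Symlink(pemPath, link); err != nil {
            log.Fatal(err)
        }
    }
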
	I1027 23:08:06.748942 1274679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:08:06.752895 1274679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 23:08:06.793791 1274679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 23:08:06.835055 1274679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 23:08:06.876514 1274679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 23:08:06.917911 1274679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 23:08:06.959314 1274679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
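
Note: the `-checkend 86400` runs above succeed only if the certificate is still valid 86400 seconds (24 hours) from now; a failing check would trigger regeneration. A minimal Go sketch of the same validity-window test, assuming a local PEM path:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data) // first PEM block should be the certificate
        if block == nil {
            log.Fatal("no PEM data found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        deadline := time.Now().Add(24 * time.Hour)
        if cert.NotAfter.Before(deadline) {
            fmt.Println("certificate will expire within 24h")
            os.Exit(1) // mirrors openssl's non-zero exit when -checkend fails
        }
        fmt.Println("certificate is valid for at least another 24h")
    }
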
	I1027 23:08:07.000791 1274679 kubeadm.go:401] StartCluster: {Name:pause-180608 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-180608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:08:07.000926 1274679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:08:07.000999 1274679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:08:07.034058 1274679 cri.go:89] found id: "64d490196d16ba5e9e067647e6c057744f2984df8bb471f59101d483eb228168"
	I1027 23:08:07.034080 1274679 cri.go:89] found id: "1852461627d88419e9ec506bd983019b2d829ddf9c13e1acb0e9a1afeaa96a41"
	I1027 23:08:07.034085 1274679 cri.go:89] found id: "2b428d4b7e6fbf4f947b835d957fda754922104d7bf53f17c3783574eafa08d7"
	I1027 23:08:07.034089 1274679 cri.go:89] found id: "8e2099955fee832bae84d5ff137f8359811066bc9c95e88db65fd0ae081d7627"
	I1027 23:08:07.034093 1274679 cri.go:89] found id: "11948704eefc0fd263f8fad40340db77a8d0431f866be69fc274a1e120cedcb1"
	I1027 23:08:07.034096 1274679 cri.go:89] found id: "190b5dd4515332ce06bf30b75f07111cc7134d2b22bc385fb9a47744a7ced680"
	I1027 23:08:07.034098 1274679 cri.go:89] found id: "ccf3881ff1ed45bc8d78cb82b817e75eea09bf871e82ef8b5245f5a2cf9233f2"
	I1027 23:08:07.034101 1274679 cri.go:89] found id: ""
	I1027 23:08:07.034169 1274679 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 23:08:07.045499 1274679 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:08:07Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:08:07.045581 1274679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:08:07.053825 1274679 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 23:08:07.053847 1274679 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 23:08:07.053926 1274679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 23:08:07.062123 1274679 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 23:08:07.062808 1274679 kubeconfig.go:125] found "pause-180608" server: "https://192.168.76.2:8443"
	I1027 23:08:07.063383 1274679 kapi.go:59] client config for pause-180608: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.crt", KeyFile:"/home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.key", CAFile:"/home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21204e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 23:08:07.063874 1274679 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1027 23:08:07.063893 1274679 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1027 23:08:07.063899 1274679 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1027 23:08:07.063907 1274679 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1027 23:08:07.063912 1274679 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1027 23:08:07.064170 1274679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 23:08:07.072177 1274679 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1027 23:08:07.072210 1274679 kubeadm.go:602] duration metric: took 18.357098ms to restartPrimaryControlPlane
	I1027 23:08:07.072220 1274679 kubeadm.go:403] duration metric: took 71.454804ms to StartCluster
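
Note: the `diff -u kubeadm.yaml kubeadm.yaml.new` at 23:08:07.064 is how minikube decides between a plain restart and a kubeadm reconfigure: identical files mean "does not require reconfiguration". A minimal local sketch of that decision, comparing the two files byte for byte instead of shelling out to diff:

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"
    )

    func main() {
        old, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            log.Fatal(err)
        }
        fresh, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            log.Fatal(err)
        }
        if bytes.Equal(old, fresh) {
            fmt.Println("configs match: no reconfiguration required")
        } else {
            fmt.Println("configs differ: kubeadm reconfiguration needed")
        }
    }
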
	I1027 23:08:07.072234 1274679 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:07.072313 1274679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:08:07.072939 1274679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:07.073167 1274679 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:08:07.073506 1274679 config.go:182] Loaded profile config "pause-180608": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:08:07.073556 1274679 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:08:07.078630 1274679 out.go:179] * Verifying Kubernetes components...
	I1027 23:08:07.078702 1274679 out.go:179] * Enabled addons: 
	I1027 23:08:04.083053 1275118 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-179399:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.793620521s)
	I1027 23:08:04.083085 1275118 kic.go:203] duration metric: took 4.79377213s to extract preloaded images to volume ...
	W1027 23:08:04.083250 1275118 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 23:08:04.083359 1275118 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 23:08:04.192939 1275118 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-179399 --name force-systemd-env-179399 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-179399 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-179399 --network force-systemd-env-179399 --ip 192.168.85.2 --volume force-systemd-env-179399:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 23:08:04.577431 1275118 cli_runner.go:164] Run: docker container inspect force-systemd-env-179399 --format={{.State.Running}}
	I1027 23:08:04.599648 1275118 cli_runner.go:164] Run: docker container inspect force-systemd-env-179399 --format={{.State.Status}}
	I1027 23:08:04.625272 1275118 cli_runner.go:164] Run: docker exec force-systemd-env-179399 stat /var/lib/dpkg/alternatives/iptables
	I1027 23:08:04.689224 1275118 oci.go:144] the created container "force-systemd-env-179399" has a running status.
	I1027 23:08:04.689259 1275118 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/force-systemd-env-179399/id_rsa...
	I1027 23:08:05.434439 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/force-systemd-env-179399/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1027 23:08:05.434484 1275118 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/force-systemd-env-179399/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
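
Note: "Creating ssh key for kic" above generates the keypair whose public half is then copied into /home/docker/.ssh/authorized_keys inside the container. A minimal sketch of generating such a keypair in Go (output file names illustrative, not minikube's exact code):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        // Private key as PKCS#1 PEM, mode 0600 like any id_rsa.
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
            log.Fatal(err)
        }
        // Public key in authorized_keys format ("ssh-rsa AAAA...").
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
            log.Fatal(err)
        }
    }
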
	I1027 23:08:05.461757 1275118 cli_runner.go:164] Run: docker container inspect force-systemd-env-179399 --format={{.State.Status}}
	I1027 23:08:05.486255 1275118 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 23:08:05.486275 1275118 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-179399 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 23:08:05.560840 1275118 cli_runner.go:164] Run: docker container inspect force-systemd-env-179399 --format={{.State.Status}}
	I1027 23:08:05.588047 1275118 machine.go:94] provisionDockerMachine start ...
	I1027 23:08:05.588148 1275118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-179399
	I1027 23:08:05.615631 1275118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:08:05.615970 1275118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34469 <nil> <nil>}
	I1027 23:08:05.615979 1275118 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:08:05.616751 1275118 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53666->127.0.0.1:34469: read: connection reset by peer
	I1027 23:08:07.082249 1274679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:08:07.082413 1274679 addons.go:514] duration metric: took 8.825526ms for enable addons: enabled=[]
	I1027 23:08:07.211285 1274679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:08:07.224959 1274679 node_ready.go:35] waiting up to 6m0s for node "pause-180608" to be "Ready" ...
	W1027 23:08:09.225548 1274679 node_ready.go:55] error getting node "pause-180608" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/pause-180608": dial tcp 192.168.76.2:8443: connect: connection refused
	I1027 23:08:08.770059 1275118 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-179399
	
	I1027 23:08:08.770125 1275118 ubuntu.go:182] provisioning hostname "force-systemd-env-179399"
	I1027 23:08:08.770196 1275118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-179399
	I1027 23:08:08.787603 1275118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:08:08.787929 1275118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34469 <nil> <nil>}
	I1027 23:08:08.787947 1275118 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-179399 && echo "force-systemd-env-179399" | sudo tee /etc/hostname
	I1027 23:08:08.948331 1275118 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-179399
	
	I1027 23:08:08.948411 1275118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-179399
	I1027 23:08:08.965915 1275118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:08:08.966282 1275118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34469 <nil> <nil>}
	I1027 23:08:08.966307 1275118 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-179399' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-179399/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-179399' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:08:09.118885 1275118 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 23:08:09.118960 1275118 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:08:09.118998 1275118 ubuntu.go:190] setting up certificates
	I1027 23:08:09.119037 1275118 provision.go:84] configureAuth start
	I1027 23:08:09.119148 1275118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-179399
	I1027 23:08:09.136260 1275118 provision.go:143] copyHostCerts
	I1027 23:08:09.136307 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:08:09.136340 1275118 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:08:09.136347 1275118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:08:09.136423 1275118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:08:09.136498 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:08:09.136514 1275118 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:08:09.136518 1275118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:08:09.136542 1275118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:08:09.136579 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:08:09.136595 1275118 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:08:09.136599 1275118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:08:09.136620 1275118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:08:09.136663 1275118 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-179399 san=[127.0.0.1 192.168.85.2 force-systemd-env-179399 localhost minikube]
	I1027 23:08:10.172774 1275118 provision.go:177] copyRemoteCerts
	I1027 23:08:10.172857 1275118 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:08:10.172907 1275118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-179399
	I1027 23:08:10.196035 1275118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34469 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/force-systemd-env-179399/id_rsa Username:docker}
	I1027 23:08:10.327690 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1027 23:08:10.327754 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:08:10.360125 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1027 23:08:10.360186 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1027 23:08:10.388988 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1027 23:08:10.389053 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 23:08:10.416149 1275118 provision.go:87] duration metric: took 1.297068472s to configureAuth
	I1027 23:08:10.416178 1275118 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:08:10.416348 1275118 config.go:182] Loaded profile config "force-systemd-env-179399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:08:10.416468 1275118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-179399
	I1027 23:08:10.443858 1275118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:08:10.444178 1275118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34469 <nil> <nil>}
	I1027 23:08:10.444199 1275118 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:08:10.827633 1275118 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:08:10.827664 1275118 machine.go:97] duration metric: took 5.239597227s to provisionDockerMachine
	I1027 23:08:10.827674 1275118 client.go:176] duration metric: took 12.205322951s to LocalClient.Create
	I1027 23:08:10.827688 1275118 start.go:167] duration metric: took 12.205396897s to libmachine.API.Create "force-systemd-env-179399"
	I1027 23:08:10.827699 1275118 start.go:293] postStartSetup for "force-systemd-env-179399" (driver="docker")
	I1027 23:08:10.827710 1275118 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:08:10.827785 1275118 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:08:10.827830 1275118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-179399
	I1027 23:08:10.863923 1275118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34469 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/force-systemd-env-179399/id_rsa Username:docker}
	I1027 23:08:10.995539 1275118 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:08:10.999002 1275118 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:08:10.999028 1275118 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:08:10.999039 1275118 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:08:10.999095 1275118 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:08:10.999177 1275118 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:08:10.999183 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> /etc/ssl/certs/11347352.pem
	I1027 23:08:10.999305 1275118 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:08:11.007829 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:08:11.026710 1275118 start.go:296] duration metric: took 198.981068ms for postStartSetup
	I1027 23:08:11.027072 1275118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-179399
	I1027 23:08:11.045732 1275118 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/config.json ...
	I1027 23:08:11.046005 1275118 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:08:11.046068 1275118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-179399
	I1027 23:08:11.071989 1275118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34469 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/force-systemd-env-179399/id_rsa Username:docker}
	I1027 23:08:11.195005 1275118 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:08:11.203341 1275118 start.go:128] duration metric: took 12.584561389s to createHost
	I1027 23:08:11.203367 1275118 start.go:83] releasing machines lock for "force-systemd-env-179399", held for 12.584692968s
	I1027 23:08:11.203439 1275118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-179399
	I1027 23:08:11.231962 1275118 ssh_runner.go:195] Run: cat /version.json
	I1027 23:08:11.232015 1275118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-179399
	I1027 23:08:11.232043 1275118 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:08:11.232111 1275118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-179399
	I1027 23:08:11.259758 1275118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34469 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/force-systemd-env-179399/id_rsa Username:docker}
	I1027 23:08:11.284421 1275118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34469 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/force-systemd-env-179399/id_rsa Username:docker}
	I1027 23:08:11.382015 1275118 ssh_runner.go:195] Run: systemctl --version
	I1027 23:08:11.521380 1275118 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:08:11.601101 1275118 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:08:11.607880 1275118 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:08:11.607952 1275118 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:08:11.649990 1275118 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
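
Note: the `find ... -exec mv {} {}.mk_disabled` above parks any bridge/podman CNI configs out of the way so the recommended kindnet CNI can own pod networking. A minimal Go sketch of the same rename-to-disable step:

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, err := filepath.Glob(pattern)
            if err != nil {
                log.Fatal(err)
            }
            for _, m := range matches {
                // Skip configs that were already disabled on a previous run.
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    log.Fatal(err)
                }
                fmt.Println("disabled", m)
            }
        }
    }
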
	I1027 23:08:11.650012 1275118 start.go:496] detecting cgroup driver to use...
	I1027 23:08:11.650027 1275118 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1027 23:08:11.650081 1275118 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:08:11.669872 1275118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:08:11.684176 1275118 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:08:11.684289 1275118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:08:11.702877 1275118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:08:11.723469 1275118 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:08:11.908003 1275118 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:08:12.125202 1275118 docker.go:234] disabling docker service ...
	I1027 23:08:12.125277 1275118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:08:12.155647 1275118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:08:12.178577 1275118 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:08:12.404667 1275118 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:08:12.644212 1275118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:08:12.667630 1275118 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:08:12.687296 1275118 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:08:12.687363 1275118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:12.699896 1275118 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 23:08:12.699964 1275118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:12.714206 1275118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:12.729011 1275118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:12.740894 1275118 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:08:12.755870 1275118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:12.768728 1275118 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:08:12.793153 1275118 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
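
Note: the chain of `sed -i` edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls). A minimal Go sketch of one such edit, forcing `cgroup_manager = "systemd"` via a multiline regex; the other edits follow the same pattern:

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            log.Fatal(err)
        }
        // (?m) makes ^/$ match per line, like sed's default addressing.
        re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        out := re.ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
        if err := os.WriteFile(conf, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }
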
	I1027 23:08:12.806243 1275118 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:08:12.817651 1275118 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:08:12.829997 1275118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:08:13.030565 1275118 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 23:08:13.231275 1275118 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:08:13.231393 1275118 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
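
Note: "Will wait 60s for socket path" is implemented by polling `stat` on the CRI socket until it appears after the crio restart. A minimal sketch of that wait loop in Go; the 500ms poll interval is an assumption:

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"
    )

    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // socket exists, runtime is up
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            log.Fatal(err)
        }
        fmt.Println("crio socket is ready")
    }
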
	I1027 23:08:13.237114 1275118 start.go:564] Will wait 60s for crictl version
	I1027 23:08:13.237234 1275118 ssh_runner.go:195] Run: which crictl
	I1027 23:08:13.241503 1275118 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:08:13.279148 1275118 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 23:08:13.279308 1275118 ssh_runner.go:195] Run: crio --version
	I1027 23:08:13.330817 1275118 ssh_runner.go:195] Run: crio --version
	I1027 23:08:13.381905 1275118 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 23:08:13.384940 1275118 cli_runner.go:164] Run: docker network inspect force-systemd-env-179399 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:08:13.406589 1275118 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 23:08:13.411131 1275118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
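
Note: the bash snippet above updates /etc/hosts idempotently: strip any existing `host.minikube.internal` line, append the fresh mapping, and copy the result back. The same logic as a minimal Go sketch (run as root):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.85.1\thost.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        // Drop any stale mapping, mirroring the grep -v in the log.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        out := strings.Join(kept, "\n") + "\n"
        if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
            log.Fatal(err)
        }
    }
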
	I1027 23:08:13.426183 1275118 kubeadm.go:884] updating cluster {Name:force-systemd-env-179399 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-179399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:08:13.426293 1275118 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:08:13.426353 1275118 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:08:13.492979 1275118 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:08:13.493004 1275118 crio.go:433] Images already preloaded, skipping extraction
	I1027 23:08:13.493062 1275118 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:08:13.544526 1275118 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:08:13.544546 1275118 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:08:13.544554 1275118 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 23:08:13.544657 1275118 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-179399 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-179399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 23:08:13.544735 1275118 ssh_runner.go:195] Run: crio config
	I1027 23:08:13.631193 1275118 cni.go:84] Creating CNI manager for ""
	I1027 23:08:13.631215 1275118 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:08:13.631228 1275118 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 23:08:13.631251 1275118 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-179399 NodeName:force-systemd-env-179399 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:08:13.631397 1275118 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-179399"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
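
Note: minikube renders the kubeadm/kubelet/kube-proxy YAML above from a Go text/template before scp'ing it to /var/tmp/minikube/kubeadm.yaml.new. The sketch below is only a toy illustration of that rendering approach; the template is truncated and the field names are made up, not minikube's actual template:

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        err := t.Execute(os.Stdout, struct {
            NodeIP   string
            Port     int
            NodeName string
        }{"192.168.85.2", 8443, "force-systemd-env-179399"})
        if err != nil {
            log.Fatal(err)
        }
    }
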
	I1027 23:08:13.631479 1275118 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:08:13.641258 1275118 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:08:13.641344 1275118 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:08:13.650739 1275118 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1027 23:08:13.679606 1275118 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:08:13.708410 1275118 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1027 23:08:13.728178 1275118 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:08:13.732028 1275118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:08:13.747021 1275118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:08:13.935389 1275118 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:08:13.971633 1275118 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399 for IP: 192.168.85.2
	I1027 23:08:13.971655 1275118 certs.go:195] generating shared ca certs ...
	I1027 23:08:13.971671 1275118 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:13.971803 1275118 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:08:13.971858 1275118 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:08:13.971867 1275118 certs.go:257] generating profile certs ...
	I1027 23:08:13.971930 1275118 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/client.key
	I1027 23:08:13.971945 1275118 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/client.crt with IP's: []
	I1027 23:08:15.114154 1275118 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/client.crt ...
	I1027 23:08:15.114194 1275118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/client.crt: {Name:mk8261c70d04e916f451c560528b8afe9c02b78f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:15.114435 1275118 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/client.key ...
	I1027 23:08:15.114455 1275118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/client.key: {Name:mkc8828fc8a0d0784e13182a0b4de4717eadd1c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:15.114580 1275118 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.key.705069fd
	I1027 23:08:15.114603 1275118 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.crt.705069fd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1027 23:08:15.695249 1275118 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.crt.705069fd ...
	I1027 23:08:15.695282 1275118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.crt.705069fd: {Name:mke373ab2e1f4acaa3135981d58c28ea4d8e3b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:15.695494 1275118 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.key.705069fd ...
	I1027 23:08:15.695514 1275118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.key.705069fd: {Name:mke4440f9724ae7fa7ab9e7eab1a7dbd6e626d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:15.695616 1275118 certs.go:382] copying /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.crt.705069fd -> /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.crt
	I1027 23:08:15.695699 1275118 certs.go:386] copying /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.key.705069fd -> /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.key
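
Note: generating the signed profile cert for "minikube" with IP's [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2] issues an apiserver serving certificate whose IP SANs cover the service VIP, loopback, and node IP, signed by the shared minikubeCA. A condensed Go sketch of issuing such a cert; the helper shape is illustrative, CA loading is elided, and the validity period follows the CertExpiration:26280h0m0s seen in the config dump:

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // issueAPIServerCert returns DER-encoded cert bytes plus the new key,
    // signed by the given CA (loading caCert/caKey is left out here).
    func issueAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        return der, key, err
    }
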
	I1027 23:08:15.695762 1275118 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.key
	I1027 23:08:15.695781 1275118 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.crt with IP's: []
	I1027 23:08:16.250487 1275118 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.crt ...
	I1027 23:08:16.250522 1275118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.crt: {Name:mk6c00abb5d8b28ed9ef9df62c5ce825dd869448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:16.250700 1275118 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.key ...
	I1027 23:08:16.250717 1275118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.key: {Name:mk06915f6bcdfd833f0e43133e245017081ca4bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:08:16.250800 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1027 23:08:16.250825 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1027 23:08:16.250839 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1027 23:08:16.250858 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1027 23:08:16.250872 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1027 23:08:16.250889 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1027 23:08:16.250901 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1027 23:08:16.250918 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1027 23:08:16.250973 1275118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:08:16.251012 1275118 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:08:16.251024 1275118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:08:16.251050 1275118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:08:16.251077 1275118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:08:16.251102 1275118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:08:16.251150 1275118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:08:16.251182 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:08:16.251199 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem -> /usr/share/ca-certificates/1134735.pem
	I1027 23:08:16.251211 1275118 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> /usr/share/ca-certificates/11347352.pem
	I1027 23:08:16.251738 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:08:16.281573 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:08:16.313813 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:08:16.348841 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:08:16.383772 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1027 23:08:16.404612 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 23:08:16.428180 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:08:16.451803 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/force-systemd-env-179399/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 23:08:16.483059 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:08:16.511597 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:08:16.542609 1275118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:08:16.578482 1275118 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:08:16.616640 1275118 ssh_runner.go:195] Run: openssl version
	I1027 23:08:16.624581 1275118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:08:16.640331 1275118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:08:16.644354 1275118 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:08:16.644417 1275118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:08:16.697462 1275118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 23:08:16.714843 1275118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:08:16.723094 1275118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:08:16.730233 1275118 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:08:16.730346 1275118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:08:16.780938 1275118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
	I1027 23:08:16.791561 1275118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:08:16.803585 1275118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:08:16.807691 1275118 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:08:16.807805 1275118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:08:16.858192 1275118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
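
The test/ln/hash sequence above is the standard OpenSSL trust-store wiring: each CA file is copied under /usr/share/ca-certificates, its subject hash is computed with "openssl x509 -hash -noout", and a <hash>.0 symlink is created in /etc/ssl/certs so the TLS verifier can locate it by hash. A minimal Go sketch of one such pass, shelling out to openssl exactly as the log does (linkCert and the chosen path are illustrative helpers, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCert computes the OpenSSL subject hash of certPath and creates the
    // /etc/ssl/certs/<hash>.0 symlink, mirroring the "openssl x509 -hash" +
    // "ln -fs" pair in the log above.
    func linkCert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // "-f" semantics: replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
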
	I1027 23:08:16.870664 1275118 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:08:16.878182 1275118 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 23:08:16.878281 1275118 kubeadm.go:401] StartCluster: {Name:force-systemd-env-179399 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-179399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:08:16.878413 1275118 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:08:16.878510 1275118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:08:16.914374 1275118 cri.go:89] found id: ""
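
The empty result here (found id: "") is what tells minikube that no kube-system containers exist yet, i.e. this is a fresh control plane rather than a restart. A sketch of the same probe via os/exec, assuming crictl and sudo are available on the node as in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same probe as the log line above: list all kube-system container IDs.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	ids := strings.Fields(string(out)) // one ID per line; empty on a fresh node
    	fmt.Printf("found %d kube-system containers\n", len(ids))
    }
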
	I1027 23:08:16.914514 1275118 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:08:16.924758 1275118 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 23:08:16.947434 1275118 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 23:08:16.947576 1275118 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 23:08:16.963158 1275118 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 23:08:16.963243 1275118 kubeadm.go:158] found existing configuration files:
	
	I1027 23:08:16.963331 1275118 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 23:08:16.984044 1275118 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 23:08:16.984165 1275118 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 23:08:17.015818 1275118 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 23:08:17.035706 1275118 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 23:08:17.035822 1275118 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 23:08:17.055538 1275118 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 23:08:17.065968 1275118 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 23:08:17.066082 1275118 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 23:08:17.076330 1275118 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 23:08:17.092556 1275118 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 23:08:17.092698 1275118 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
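
Each grep/rm pair above applies the same rule to one of the four kubeconfigs: a file that does not reference https://control-plane.minikube.internal:8443 is treated as stale and removed so kubeadm init can regenerate it. A hedged Go sketch of that per-file rule (removeIfStale is an illustrative helper, not minikube's code):

    package main

    import (
    	"os"
    	"strings"
    )

    // removeIfStale keeps cfg only if it already references the expected
    // control-plane endpoint; otherwise it is deleted with rm -f semantics.
    func removeIfStale(cfg, endpoint string) error {
    	data, err := os.ReadFile(cfg)
    	if err == nil && strings.Contains(string(data), endpoint) {
    		return nil // config is current, keep it
    	}
    	if err := os.Remove(cfg); err != nil && !os.IsNotExist(err) {
    		return err // a missing file is fine, as in the log
    	}
    	return nil
    }

    func main() {
    	for _, cfg := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		_ = removeIfStale(cfg, "https://control-plane.minikube.internal:8443")
    	}
    }
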
	I1027 23:08:17.100655 1275118 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 23:08:17.166793 1275118 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 23:08:17.167003 1275118 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 23:08:17.218682 1275118 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 23:08:17.218769 1275118 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 23:08:17.218812 1275118 kubeadm.go:319] OS: Linux
	I1027 23:08:17.218876 1275118 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 23:08:17.218938 1275118 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1027 23:08:17.218998 1275118 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 23:08:17.219059 1275118 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 23:08:17.219114 1275118 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 23:08:17.219174 1275118 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 23:08:17.219233 1275118 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 23:08:17.219295 1275118 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 23:08:17.219355 1275118 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1027 23:08:17.350538 1275118 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 23:08:17.350663 1275118 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 23:08:17.350775 1275118 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 23:08:17.362785 1275118 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 23:08:17.370375 1275118 out.go:252]   - Generating certificates and keys ...
	I1027 23:08:17.370527 1275118 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 23:08:17.370611 1275118 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 23:08:18.002109 1275118 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 23:08:18.094623 1275118 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 23:08:17.115769 1274679 node_ready.go:49] node "pause-180608" is "Ready"
	I1027 23:08:17.115796 1274679 node_ready.go:38] duration metric: took 9.890795108s for node "pause-180608" to be "Ready" ...
	I1027 23:08:17.115810 1274679 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:08:17.115867 1274679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:08:17.149679 1274679 api_server.go:72] duration metric: took 10.076474175s to wait for apiserver process to appear ...
	I1027 23:08:17.149718 1274679 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:08:17.149738 1274679 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 23:08:17.349127 1274679 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 23:08:17.349204 1274679 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 23:08:17.650703 1274679 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 23:08:17.765146 1274679 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 23:08:17.765184 1274679 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 23:08:18.150837 1274679 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 23:08:18.172410 1274679 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 23:08:18.172444 1274679 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 23:08:18.649898 1274679 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 23:08:18.674078 1274679 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 23:08:18.674162 1274679 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 23:08:19.150319 1274679 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 23:08:19.169602 1274679 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 23:08:19.171524 1274679 api_server.go:141] control plane version: v1.34.1
	I1027 23:08:19.171592 1274679 api_server.go:131] duration metric: took 2.021865281s to wait for apiserver health ...
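
The 403 -> 500 -> 200 progression above is the normal apiserver startup arc: anonymous /healthz is Forbidden until the RBAC bootstrap roles exist, then individual poststarthooks report failed until bootstrap completes, and finally the endpoint returns a plain "ok". A minimal Go polling loop in the same spirit (endpoint and ~500ms cadence taken from the log; TLS verification is skipped only because this sketch wires in no cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Sketch only: a real client should trust the cluster CA instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		resp, err := client.Get("https://192.168.76.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz: %s\n", body) // "ok"
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
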
	I1027 23:08:19.171615 1274679 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:08:19.176246 1274679 system_pods.go:59] 7 kube-system pods found
	I1027 23:08:19.176338 1274679 system_pods.go:61] "coredns-66bc5c9577-jpzmv" [b6d46c56-4560-41fa-8260-aa53ca712c2a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:08:19.176368 1274679 system_pods.go:61] "etcd-pause-180608" [1aa86d51-ae56-4f18-8bd8-31af60173abb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:08:19.176387 1274679 system_pods.go:61] "kindnet-pslcl" [1b2adb05-3d0c-4584-bc81-63f0cc6613ea] Running
	I1027 23:08:19.176422 1274679 system_pods.go:61] "kube-apiserver-pause-180608" [bd296105-222a-4a81-820b-3ea0f7d3b789] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:08:19.176449 1274679 system_pods.go:61] "kube-controller-manager-pause-180608" [ab99b59b-334a-46d7-a97f-d0d6f3391519] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:08:19.176468 1274679 system_pods.go:61] "kube-proxy-22xkc" [c797f2db-9e8c-4853-a30f-9e3104917115] Running
	I1027 23:08:19.176506 1274679 system_pods.go:61] "kube-scheduler-pause-180608" [862456a9-d065-4378-a5c8-fa4d9f086880] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:08:19.176532 1274679 system_pods.go:74] duration metric: took 4.897571ms to wait for pod list to return data ...
	I1027 23:08:19.176554 1274679 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:08:19.179287 1274679 default_sa.go:45] found service account: "default"
	I1027 23:08:19.179339 1274679 default_sa.go:55] duration metric: took 2.751239ms for default service account to be created ...
	I1027 23:08:19.179375 1274679 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:08:19.182192 1274679 system_pods.go:86] 7 kube-system pods found
	I1027 23:08:19.182261 1274679 system_pods.go:89] "coredns-66bc5c9577-jpzmv" [b6d46c56-4560-41fa-8260-aa53ca712c2a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:08:19.182286 1274679 system_pods.go:89] "etcd-pause-180608" [1aa86d51-ae56-4f18-8bd8-31af60173abb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:08:19.182327 1274679 system_pods.go:89] "kindnet-pslcl" [1b2adb05-3d0c-4584-bc81-63f0cc6613ea] Running
	I1027 23:08:19.182355 1274679 system_pods.go:89] "kube-apiserver-pause-180608" [bd296105-222a-4a81-820b-3ea0f7d3b789] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:08:19.182406 1274679 system_pods.go:89] "kube-controller-manager-pause-180608" [ab99b59b-334a-46d7-a97f-d0d6f3391519] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:08:19.182431 1274679 system_pods.go:89] "kube-proxy-22xkc" [c797f2db-9e8c-4853-a30f-9e3104917115] Running
	I1027 23:08:19.182457 1274679 system_pods.go:89] "kube-scheduler-pause-180608" [862456a9-d065-4378-a5c8-fa4d9f086880] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:08:19.182492 1274679 system_pods.go:126] duration metric: took 3.092537ms to wait for k8s-apps to be running ...
	I1027 23:08:19.182521 1274679 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:08:19.182604 1274679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:08:19.198592 1274679 system_svc.go:56] duration metric: took 16.063989ms WaitForService to wait for kubelet
	I1027 23:08:19.198670 1274679 kubeadm.go:587] duration metric: took 12.125470164s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:08:19.198726 1274679 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:08:19.201861 1274679 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:08:19.201939 1274679 node_conditions.go:123] node cpu capacity is 2
	I1027 23:08:19.201968 1274679 node_conditions.go:105] duration metric: took 3.22355ms to run NodePressure ...
	I1027 23:08:19.201993 1274679 start.go:242] waiting for startup goroutines ...
	I1027 23:08:19.202033 1274679 start.go:247] waiting for cluster config update ...
	I1027 23:08:19.202057 1274679 start.go:256] writing updated cluster config ...
	I1027 23:08:19.202471 1274679 ssh_runner.go:195] Run: rm -f paused
	I1027 23:08:19.206437 1274679 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:08:19.207063 1274679 kapi.go:59] client config for pause-180608: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.crt", KeyFile:"/home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.key", CAFile:"/home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21204e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
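
The dumped rest.Config above is plain certificate authentication: a client cert/key pair plus the cluster CA. Reconstructed with client-go's public API, using the same host and file paths as in the dump (a sketch, not the code that produced it):

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{
    		Host: "https://192.168.76.2:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.crt",
    			KeyFile:  "/home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.key",
    			CAFile:   "/home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	_ = clientset // ready for Pods("kube-system").Get(...) and similar calls
    }
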
	I1027 23:08:19.212893 1274679 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jpzmv" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:18.443408 1275118 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 23:08:18.953096 1275118 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 23:08:19.233890 1275118 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 23:08:19.234496 1275118 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-179399 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 23:08:19.794287 1275118 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 23:08:19.794479 1275118 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-179399 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 23:08:20.209613 1275118 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 23:08:21.189612 1275118 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 23:08:22.060058 1275118 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 23:08:22.060376 1275118 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 23:08:22.240824 1275118 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 23:08:23.130237 1275118 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 23:08:23.189344 1275118 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 23:08:23.445089 1275118 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 23:08:23.772693 1275118 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 23:08:23.773339 1275118 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 23:08:23.776011 1275118 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1027 23:08:21.224870 1274679 pod_ready.go:104] pod "coredns-66bc5c9577-jpzmv" is not "Ready", error: <nil>
	I1027 23:08:22.219106 1274679 pod_ready.go:94] pod "coredns-66bc5c9577-jpzmv" is "Ready"
	I1027 23:08:22.219146 1274679 pod_ready.go:86] duration metric: took 3.006181251s for pod "coredns-66bc5c9577-jpzmv" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:22.222755 1274679 pod_ready.go:83] waiting for pod "etcd-pause-180608" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:22.231318 1274679 pod_ready.go:94] pod "etcd-pause-180608" is "Ready"
	I1027 23:08:22.231346 1274679 pod_ready.go:86] duration metric: took 8.565279ms for pod "etcd-pause-180608" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:22.234252 1274679 pod_ready.go:83] waiting for pod "kube-apiserver-pause-180608" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:22.240011 1274679 pod_ready.go:94] pod "kube-apiserver-pause-180608" is "Ready"
	I1027 23:08:22.240042 1274679 pod_ready.go:86] duration metric: took 5.762052ms for pod "kube-apiserver-pause-180608" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:22.243481 1274679 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-180608" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 23:08:24.251216 1274679 pod_ready.go:104] pod "kube-controller-manager-pause-180608" is not "Ready", error: <nil>
	I1027 23:08:23.779306 1275118 out.go:252]   - Booting up control plane ...
	I1027 23:08:23.779414 1275118 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 23:08:23.779502 1275118 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 23:08:23.779577 1275118 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 23:08:23.797090 1275118 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 23:08:23.797211 1275118 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 23:08:23.804690 1275118 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 23:08:23.805081 1275118 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 23:08:23.805328 1275118 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 23:08:23.942864 1275118 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 23:08:23.942989 1275118 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 23:08:24.940605 1275118 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00162096s
	I1027 23:08:24.944353 1275118 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 23:08:24.944459 1275118 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1027 23:08:24.944561 1275118 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 23:08:24.944648 1275118 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1027 23:08:26.261097 1274679 pod_ready.go:104] pod "kube-controller-manager-pause-180608" is not "Ready", error: <nil>
	I1027 23:08:28.249125 1274679 pod_ready.go:94] pod "kube-controller-manager-pause-180608" is "Ready"
	I1027 23:08:28.249163 1274679 pod_ready.go:86] duration metric: took 6.005657026s for pod "kube-controller-manager-pause-180608" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:28.255249 1274679 pod_ready.go:83] waiting for pod "kube-proxy-22xkc" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:28.263283 1274679 pod_ready.go:94] pod "kube-proxy-22xkc" is "Ready"
	I1027 23:08:28.263320 1274679 pod_ready.go:86] duration metric: took 8.043957ms for pod "kube-proxy-22xkc" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:28.265680 1274679 pod_ready.go:83] waiting for pod "kube-scheduler-pause-180608" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:28.416604 1274679 pod_ready.go:94] pod "kube-scheduler-pause-180608" is "Ready"
	I1027 23:08:28.416633 1274679 pod_ready.go:86] duration metric: took 150.92893ms for pod "kube-scheduler-pause-180608" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:08:28.416646 1274679 pod_ready.go:40] duration metric: took 9.210132913s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:08:28.526416 1274679 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:08:28.529757 1274679 out.go:179] * Done! kubectl is now configured to use "pause-180608" cluster and "default" namespace by default
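
The pod_ready.go waits interleaved above reduce to one test per labeled pod: either the PodReady condition is True, or the pod has disappeared. A client-go sketch of that check under those assumptions (isReady and waitReady are illustrative names; clientset construction is omitted):

    package readiness

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // isReady reports whether the pod's PodReady condition is True.
    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // waitReady blocks until the named kube-system pod is Ready or deleted,
    // the "Ready or be gone" condition in the log.
    func waitReady(cs kubernetes.Interface, name string) {
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
    		if apierrors.IsNotFound(err) {
    			return // pod is gone
    		}
    		if err == nil && isReady(pod) {
    			return // pod is Ready
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
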
	I1027 23:08:29.278575 1275118 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.333548138s
	I1027 23:08:31.116080 1275118 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.171717694s
	I1027 23:08:33.450957 1275118 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.504843067s
	I1027 23:08:33.474577 1275118 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 23:08:33.494098 1275118 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 23:08:33.515368 1275118 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 23:08:33.515584 1275118 kubeadm.go:319] [mark-control-plane] Marking the node force-systemd-env-179399 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 23:08:33.529362 1275118 kubeadm.go:319] [bootstrap-token] Using token: v0xaa2.7yxi0q0g6s1687v7
	
	
	==> CRI-O <==
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.893979411Z" level=info msg="Creating container: kube-system/kube-scheduler-pause-180608/kube-scheduler" id=782d36b2-a333-472c-8042-c45e7a687af9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.894091125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.904680921Z" level=info msg="Created container eac1eaa2581f322bde6c2d4ae935a6d2cb15370a30afec7a7667ae3a06ab0a7e: kube-system/kube-controller-manager-pause-180608/kube-controller-manager" id=2dc01812-75bf-430c-9daf-ee7f83e21ffd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.905583581Z" level=info msg="Starting container: eac1eaa2581f322bde6c2d4ae935a6d2cb15370a30afec7a7667ae3a06ab0a7e" id=5398ddfc-d6e4-48c8-b05c-7a08b44c7392 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.90685899Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.907712788Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.91483912Z" level=info msg="Started container" PID=2409 containerID=eac1eaa2581f322bde6c2d4ae935a6d2cb15370a30afec7a7667ae3a06ab0a7e description=kube-system/kube-controller-manager-pause-180608/kube-controller-manager id=5398ddfc-d6e4-48c8-b05c-7a08b44c7392 name=/runtime.v1.RuntimeService/StartContainer sandboxID=363081f1b345418ed1a5e44ad25c594298bddc6f0b12e48e33d34fb2559d39ac
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.944336643Z" level=info msg="Created container 2e1bc6d366adf84302b7bcd049e7f88bcb3a9cfa520eb44ba543635e1f6ab359: kube-system/kube-scheduler-pause-180608/kube-scheduler" id=782d36b2-a333-472c-8042-c45e7a687af9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.945263812Z" level=info msg="Starting container: 2e1bc6d366adf84302b7bcd049e7f88bcb3a9cfa520eb44ba543635e1f6ab359" id=94aaa80d-1687-40dd-a246-bef818ddf7d3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.947795602Z" level=info msg="Started container" PID=2433 containerID=2e1bc6d366adf84302b7bcd049e7f88bcb3a9cfa520eb44ba543635e1f6ab359 description=kube-system/kube-scheduler-pause-180608/kube-scheduler id=94aaa80d-1687-40dd-a246-bef818ddf7d3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7eb7741f886fbcf5e97b16855b5489675624390c0a97841983567144ac941a5f
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.980203497Z" level=info msg="Created container 53247afb6c26daf50454350a834356b289462e93b7f913f3e55b3555d45b700e: kube-system/kube-apiserver-pause-180608/kube-apiserver" id=4cac04f8-4be4-42e2-b7ae-e4787edaec69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.981182548Z" level=info msg="Starting container: 53247afb6c26daf50454350a834356b289462e93b7f913f3e55b3555d45b700e" id=1a749594-ec09-40e1-8d5c-b3481cc816cc name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:08:09 pause-180608 crio[2092]: time="2025-10-27T23:08:09.984136925Z" level=info msg="Started container" PID=2422 containerID=53247afb6c26daf50454350a834356b289462e93b7f913f3e55b3555d45b700e description=kube-system/kube-apiserver-pause-180608/kube-apiserver id=1a749594-ec09-40e1-8d5c-b3481cc816cc name=/runtime.v1.RuntimeService/StartContainer sandboxID=944719700d14b455216c5b20b5ba8ad455eafdde2ba9690b8bbaa754c1394839
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.123919064Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.127982217Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.128025951Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.128049656Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.136954116Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.137118385Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.137195194Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.140763628Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.140939639Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.141015283Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.144315881Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:08:20 pause-180608 crio[2092]: time="2025-10-27T23:08:20.14447672Z" level=info msg="Updated default CNI network name to kindnet"
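
The CREATE -> WRITE -> RENAME event sequence above is kindnet writing its CNI config atomically: the file is assembled as 10-kindnet.conflist.temp and then renamed over 10-kindnet.conflist, so CRI-O's watcher never observes a half-written config. A generic Go sketch of that write-temp-then-rename pattern (writeAtomic and the payload are illustrative placeholders):

    package main

    import "os"

    // writeAtomic produces exactly the event sequence CRI-O logged above:
    // build the file as <path>.temp, then rename into place. Rename is atomic
    // within one filesystem, so readers see either the old or the new config.
    func writeAtomic(path string, data []byte) error {
    	tmp := path + ".temp"
    	if err := os.WriteFile(tmp, data, 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	// Placeholder payload; a real conflist carries the full plugin chain.
    	_ = writeAtomic("/etc/cni/net.d/10-kindnet.conflist", []byte(`{"name":"kindnet"}`))
    }
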
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	2e1bc6d366adf       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   25 seconds ago       Running             kube-scheduler            1                   7eb7741f886fb       kube-scheduler-pause-180608            kube-system
	53247afb6c26d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   25 seconds ago       Running             kube-apiserver            1                   944719700d14b       kube-apiserver-pause-180608            kube-system
	eac1eaa2581f3       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   25 seconds ago       Running             kube-controller-manager   1                   363081f1b3454       kube-controller-manager-pause-180608   kube-system
	021da40950a29       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   25 seconds ago       Running             etcd                      1                   44e1b47786dd9       etcd-pause-180608                      kube-system
	7c741dedb9b95       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   25 seconds ago       Running             coredns                   1                   352c7e1a5b63c       coredns-66bc5c9577-jpzmv               kube-system
	90838204b928c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   25 seconds ago       Running             kindnet-cni               1                   91a7d8322f597       kindnet-pslcl                          kube-system
	893e096fab004       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   25 seconds ago       Running             kube-proxy                1                   c55e63abd8b37       kube-proxy-22xkc                       kube-system
	64d490196d16b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   42 seconds ago       Exited              coredns                   0                   352c7e1a5b63c       coredns-66bc5c9577-jpzmv               kube-system
	1852461627d88       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   c55e63abd8b37       kube-proxy-22xkc                       kube-system
	2b428d4b7e6fb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   91a7d8322f597       kindnet-pslcl                          kube-system
	8e2099955fee8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   363081f1b3454       kube-controller-manager-pause-180608   kube-system
	11948704eefc0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   7eb7741f886fb       kube-scheduler-pause-180608            kube-system
	190b5dd451533       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   44e1b47786dd9       etcd-pause-180608                      kube-system
	ccf3881ff1ed4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   944719700d14b       kube-apiserver-pause-180608            kube-system
	
	
	==> coredns [64d490196d16ba5e9e067647e6c057744f2984df8bb471f59101d483eb228168] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50128 - 40765 "HINFO IN 770041778125185702.7584871950647285198. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.027977131s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7c741dedb9b95b51a18a73a8bae03bfd6e03223aee5c148db0fb790cd53ee265] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45307 - 43781 "HINFO IN 4769005089381244768.5847984695354447938. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.042252086s
	
	
	==> describe nodes <==
	Name:               pause-180608
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-180608
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=pause-180608
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T23_07_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 23:07:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-180608
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 23:08:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 23:07:52 +0000   Mon, 27 Oct 2025 23:06:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 23:07:52 +0000   Mon, 27 Oct 2025 23:06:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 23:07:52 +0000   Mon, 27 Oct 2025 23:06:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 23:07:52 +0000   Mon, 27 Oct 2025 23:07:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-180608
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                b34479aa-efa2-484b-aa2e-cbed6f6b0ba2
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-jpzmv                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     84s
	  kube-system                 etcd-pause-180608                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         89s
	  kube-system                 kindnet-pslcl                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      84s
	  kube-system                 kube-apiserver-pause-180608             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-pause-180608    200m (10%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-22xkc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-pause-180608             100m (5%)     0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 82s                  kube-proxy       
	  Normal   Starting                 16s                  kube-proxy       
	  Warning  CgroupV1                 100s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  100s (x8 over 100s)  kubelet          Node pause-180608 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    100s (x8 over 100s)  kubelet          Node pause-180608 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     100s (x8 over 100s)  kubelet          Node pause-180608 status is now: NodeHasSufficientPID
	  Normal   Starting                 90s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 90s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  89s                  kubelet          Node pause-180608 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    89s                  kubelet          Node pause-180608 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     89s                  kubelet          Node pause-180608 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           85s                  node-controller  Node pause-180608 event: Registered Node pause-180608 in Controller
	  Normal   NodeReady                43s                  kubelet          Node pause-180608 status is now: NodeReady
	  Warning  ContainerGCFailed        29s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           15s                  node-controller  Node pause-180608 event: Registered Node pause-180608 in Controller
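
The ContainerGCFailed warning above points at a missing /var/run/crio/crio.sock: the kubelet briefly lost its container runtime while CRI-O was restarted, and the second RegisteredNode event shows the node recovering afterwards. A quick way to confirm the runtime socket is back, assuming the profile is still up (a sketch, not part of the test run):

	out/minikube-linux-arm64 -p pause-180608 ssh -- ls -l /var/run/crio/crio.sock
	out/minikube-linux-arm64 -p pause-180608 ssh -- sudo systemctl is-active crio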
	
	
	==> dmesg <==
	[  +3.174012] overlayfs: idmapped layers are currently not supported
	[ +37.061621] overlayfs: idmapped layers are currently not supported
	[Oct27 22:44] overlayfs: idmapped layers are currently not supported
	[Oct27 22:45] overlayfs: idmapped layers are currently not supported
	[  +4.255944] overlayfs: idmapped layers are currently not supported
	[Oct27 22:46] overlayfs: idmapped layers are currently not supported
	[Oct27 22:47] overlayfs: idmapped layers are currently not supported
	[Oct27 22:48] overlayfs: idmapped layers are currently not supported
	[Oct27 22:53] overlayfs: idmapped layers are currently not supported
	[Oct27 22:54] overlayfs: idmapped layers are currently not supported
	[Oct27 22:55] overlayfs: idmapped layers are currently not supported
	[Oct27 22:56] overlayfs: idmapped layers are currently not supported
	[Oct27 22:57] overlayfs: idmapped layers are currently not supported
	[Oct27 22:59] overlayfs: idmapped layers are currently not supported
	[ +25.315146] overlayfs: idmapped layers are currently not supported
	[  +1.719322] overlayfs: idmapped layers are currently not supported
	[Oct27 23:00] overlayfs: idmapped layers are currently not supported
	[Oct27 23:01] overlayfs: idmapped layers are currently not supported
	[ +42.515610] overlayfs: idmapped layers are currently not supported
	[Oct27 23:02] overlayfs: idmapped layers are currently not supported
	[Oct27 23:03] overlayfs: idmapped layers are currently not supported
	[Oct27 23:04] overlayfs: idmapped layers are currently not supported
	[Oct27 23:06] overlayfs: idmapped layers are currently not supported
	[  +3.129054] overlayfs: idmapped layers are currently not supported
	[Oct27 23:08] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [021da40950a294110e4541f9cb8799f59a838a0c2abc0af7436a6bebd4c0e8cd] <==
	{"level":"warn","ts":"2025-10-27T23:08:13.659695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:13.686210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:13.715405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:13.767999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:13.796285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:13.863400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:13.880094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:13.892780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:13.909581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:13.958190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.010589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.131709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.174421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.243748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.294949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.338607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.373971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.411289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.462564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.502036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.586538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.602654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.663716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.701356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:08:14.913413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34090","server-name":"","error":"EOF"}
	
	
	==> etcd [190b5dd4515332ce06bf30b75f07111cc7134d2b22bc385fb9a47744a7ced680] <==
	{"level":"warn","ts":"2025-10-27T23:07:01.154605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:07:01.174928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:07:01.201145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:07:01.262749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:07:01.272869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:07:01.281090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:07:01.373302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49350","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T23:07:57.828022Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-27T23:07:57.828072Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-180608","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-27T23:07:57.828155Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T23:07:58.111433Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T23:07:58.112902Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T23:07:58.112968Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-27T23:07:58.113052Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-27T23:07:58.113063Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-27T23:07:58.113359Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T23:07:58.113373Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T23:07:58.113380Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-27T23:07:58.113291Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T23:07:58.113410Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T23:07:58.113417Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T23:07:58.116356Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-27T23:07:58.116423Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T23:07:58.116450Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-27T23:07:58.116457Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-180608","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 23:08:35 up  5:51,  0 user,  load average: 5.73, 3.07, 2.29
	Linux pause-180608 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2b428d4b7e6fbf4f947b835d957fda754922104d7bf53f17c3783574eafa08d7] <==
	I1027 23:07:11.856738       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:07:11.857148       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 23:07:11.857307       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:07:11.857351       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:07:11.857388       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:07:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:07:12.035924       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:07:12.036029       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:07:12.036065       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:07:12.036284       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 23:07:42.036471       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 23:07:42.036593       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 23:07:42.037895       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 23:07:42.117140       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1027 23:07:43.636306       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 23:07:43.636403       1 metrics.go:72] Registering metrics
	I1027 23:07:43.636515       1 controller.go:711] "Syncing nftables rules"
	I1027 23:07:52.035368       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 23:07:52.035497       1 main.go:301] handling current node
	
	
	==> kindnet [90838204b928c48a4dbbbe5ce5299e995c32585a66accba00603e5262d6cbb97] <==
	I1027 23:08:09.833612       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:08:09.836861       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 23:08:09.836997       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:08:09.837009       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:08:09.837024       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:08:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:08:10.122936       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:08:10.123021       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:08:10.123058       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:08:10.123492       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 23:08:10.123057       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 23:08:10.123132       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 23:08:10.123622       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 23:08:10.123687       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1027 23:08:17.723481       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 23:08:17.723617       1 metrics.go:72] Registering metrics
	I1027 23:08:17.723719       1 controller.go:711] "Syncing nftables rules"
	I1027 23:08:20.123474       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 23:08:20.123592       1 main.go:301] handling current node
	I1027 23:08:30.122532       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 23:08:30.122606       1 main.go:301] handling current node
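
Both kindnet instances show the same pattern: reflector list/watch failures against 10.96.0.1:443 (i/o timeout in the first, connection refused in the second) while the apiserver is down, followed by "Caches are synced" once it returns, so these are transient restart artifacts rather than a CNI fault. A one-line readiness probe through minikube's bundled kubectl, as a sketch:

	out/minikube-linux-arm64 -p pause-180608 kubectl -- get --raw=/readyz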
	
	
	==> kube-apiserver [53247afb6c26daf50454350a834356b289462e93b7f913f3e55b3555d45b700e] <==
	I1027 23:08:17.493318       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 23:08:17.510649       1 aggregator.go:171] initial CRD sync complete...
	I1027 23:08:17.510767       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 23:08:17.510818       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 23:08:17.511598       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 23:08:17.515128       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 23:08:17.515830       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 23:08:17.515945       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 23:08:17.516439       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 23:08:17.538630       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 23:08:17.539244       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 23:08:17.539344       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 23:08:17.590855       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:08:17.592726       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 23:08:17.618021       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 23:08:17.622820       1 cache.go:39] Caches are synced for autoregister controller
	I1027 23:08:17.699607       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 23:08:17.707830       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 23:08:17.766077       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1027 23:08:17.807805       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 23:08:19.462657       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 23:08:20.928108       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 23:08:21.075573       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 23:08:21.125210       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 23:08:21.235053       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [ccf3881ff1ed45bc8d78cb82b817e75eea09bf871e82ef8b5245f5a2cf9233f2] <==
	W1027 23:07:57.846330       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.846437       1 logging.go:55] [core] [Channel #26 SubChannel #28]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.846486       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.846556       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.846614       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.849173       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.849236       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.849276       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.849318       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.849359       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.849400       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.853868       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.854111       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.854197       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.854272       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.854599       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.855245       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.856823       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.856872       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.856907       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.856945       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.856985       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.857022       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.857220       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 23:07:57.857423       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
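
The grpc dial errors above all come from the outgoing apiserver losing its etcd backend at 23:07:57, matching the etcd shutdown logged earlier; the other apiserver block shows the replacement instance finishing its cache syncs by 23:08:17. One way to confirm the restarted control plane settled, assuming the profile is still running:

	out/minikube-linux-arm64 -p pause-180608 kubectl -- -n kube-system get pods -o wide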
	
	
	==> kube-controller-manager [8e2099955fee832bae84d5ff137f8359811066bc9c95e88db65fd0ae081d7627] <==
	I1027 23:07:10.138587       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 23:07:10.138598       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 23:07:10.138611       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 23:07:10.138619       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 23:07:10.138545       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 23:07:10.138536       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 23:07:10.138580       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 23:07:10.144473       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 23:07:10.150537       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 23:07:10.151168       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 23:07:10.152093       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 23:07:10.157354       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 23:07:10.162492       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 23:07:10.162598       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 23:07:10.178599       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 23:07:10.194810       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:07:10.214820       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-180608" podCIDRs=["10.244.0.0/24"]
	I1027 23:07:10.218042       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 23:07:10.237696       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 23:07:10.238136       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:07:10.287654       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:07:10.287742       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 23:07:10.287774       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 23:07:10.308935       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:07:55.148810       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [eac1eaa2581f322bde6c2d4ae935a6d2cb15370a30afec7a7667ae3a06ab0a7e] <==
	I1027 23:08:20.870437       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 23:08:20.870665       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 23:08:20.870738       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 23:08:20.870690       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 23:08:20.870823       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 23:08:20.870677       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 23:08:20.870700       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 23:08:20.873045       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 23:08:20.873172       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:08:20.883666       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:08:20.883766       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 23:08:20.883797       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 23:08:20.887581       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:08:20.889902       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 23:08:20.899535       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 23:08:20.903901       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 23:08:20.912242       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 23:08:20.916731       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 23:08:20.917744       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 23:08:20.917935       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 23:08:20.918063       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 23:08:20.918108       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 23:08:20.925001       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 23:08:20.925096       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 23:08:20.932523       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	
	
	==> kube-proxy [1852461627d88419e9ec506bd983019b2d829ddf9c13e1acb0e9a1afeaa96a41] <==
	I1027 23:07:12.140715       1 server_linux.go:53] "Using iptables proxy"
	I1027 23:07:12.228104       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 23:07:12.328882       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 23:07:12.328997       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 23:07:12.329132       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 23:07:12.364342       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:07:12.364473       1 server_linux.go:132] "Using iptables Proxier"
	I1027 23:07:12.368589       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 23:07:12.368988       1 server.go:527] "Version info" version="v1.34.1"
	I1027 23:07:12.369178       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:07:12.370543       1 config.go:200] "Starting service config controller"
	I1027 23:07:12.370608       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 23:07:12.370653       1 config.go:106] "Starting endpoint slice config controller"
	I1027 23:07:12.370681       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 23:07:12.370736       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 23:07:12.370761       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 23:07:12.371402       1 config.go:309] "Starting node config controller"
	I1027 23:07:12.373771       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 23:07:12.373841       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 23:07:12.470914       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 23:07:12.471010       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 23:07:12.473665       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [893e096fab0047978d7befba788f303c50255093c6b08e3b673897a4a72cf757] <==
	I1027 23:08:09.783438       1 server_linux.go:53] "Using iptables proxy"
	I1027 23:08:10.955419       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 23:08:17.752110       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 23:08:17.752226       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 23:08:17.752337       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 23:08:18.947869       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:08:18.947985       1 server_linux.go:132] "Using iptables Proxier"
	I1027 23:08:19.020894       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 23:08:19.021174       1 server.go:527] "Version info" version="v1.34.1"
	I1027 23:08:19.021198       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:08:19.029846       1 config.go:200] "Starting service config controller"
	I1027 23:08:19.029882       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 23:08:19.029899       1 config.go:106] "Starting endpoint slice config controller"
	I1027 23:08:19.029909       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 23:08:19.029923       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 23:08:19.029929       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 23:08:19.030562       1 config.go:309] "Starting node config controller"
	I1027 23:08:19.030580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 23:08:19.030586       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 23:08:19.131218       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 23:08:19.145156       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 23:08:19.154525       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
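
Both kube-proxy instances log the same non-fatal warning: with nodePortAddresses unset, NodePort connections are accepted on every local IP. The log itself names the remedy (--nodeport-addresses primary); in a kubeadm-style cluster that corresponds to the nodePortAddresses field of the kube-proxy ConfigMap, roughly as sketched below (an illustration of the setting, not something this run performed):

	out/minikube-linux-arm64 -p pause-180608 kubectl -- -n kube-system edit configmap kube-proxy
	# then, inside config.conf, set:
	#   nodePortAddresses: ["primary"]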
	
	
	==> kube-scheduler [11948704eefc0fd263f8fad40340db77a8d0431f866be69fc274a1e120cedcb1] <==
	E1027 23:07:02.878156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 23:07:02.878193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 23:07:02.878240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 23:07:02.878281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 23:07:02.878330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 23:07:02.879495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 23:07:02.879555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 23:07:02.879613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 23:07:02.895843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1027 23:07:03.719987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 23:07:03.768607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 23:07:03.822697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1027 23:07:03.836465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 23:07:03.836596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 23:07:03.937395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 23:07:04.035508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 23:07:04.035615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 23:07:04.044011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1027 23:07:07.026793       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:07:57.829579       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1027 23:07:57.829689       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1027 23:07:57.829701       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1027 23:07:57.829719       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:07:57.829900       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1027 23:07:57.829914       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [2e1bc6d366adf84302b7bcd049e7f88bcb3a9cfa520eb44ba543635e1f6ab359] <==
	I1027 23:08:13.597355       1 serving.go:386] Generated self-signed cert in-memory
	I1027 23:08:19.098838       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 23:08:19.098934       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:08:19.111923       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 23:08:19.115772       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:08:19.126818       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:08:19.115787       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:08:19.126952       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:08:19.115800       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 23:08:19.115731       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 23:08:19.130581       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 23:08:19.227418       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:08:19.227464       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:08:19.230658       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.716034    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-180608\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="0f78cf5ad0bd28587872deda44de4e77" pod="kube-system/kube-apiserver-pause-180608"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.716315    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-180608\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="0861646f3e1faf01baf275c91d815b55" pod="kube-system/kube-controller-manager-pause-180608"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.716549    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-pslcl\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1b2adb05-3d0c-4584-bc81-63f0cc6613ea" pod="kube-system/kindnet-pslcl"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.716704    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-22xkc\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c797f2db-9e8c-4853-a30f-9e3104917115" pod="kube-system/kube-proxy-22xkc"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.716873    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-jpzmv\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b6d46c56-4560-41fa-8260-aa53ca712c2a" pod="kube-system/coredns-66bc5c9577-jpzmv"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: I1027 23:08:09.847777    1315 scope.go:117] "RemoveContainer" containerID="11948704eefc0fd263f8fad40340db77a8d0431f866be69fc274a1e120cedcb1"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.848353    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-180608\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="10d9c7ae05ec6d0c6bf62a82dca6c585" pod="kube-system/etcd-pause-180608"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.848552    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-180608\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="0f78cf5ad0bd28587872deda44de4e77" pod="kube-system/kube-apiserver-pause-180608"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.848727    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-180608\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="0861646f3e1faf01baf275c91d815b55" pod="kube-system/kube-controller-manager-pause-180608"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.852701    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-pslcl\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1b2adb05-3d0c-4584-bc81-63f0cc6613ea" pod="kube-system/kindnet-pslcl"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.853001    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-22xkc\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c797f2db-9e8c-4853-a30f-9e3104917115" pod="kube-system/kube-proxy-22xkc"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.853162    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-jpzmv\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b6d46c56-4560-41fa-8260-aa53ca712c2a" pod="kube-system/coredns-66bc5c9577-jpzmv"
	Oct 27 23:08:09 pause-180608 kubelet[1315]: E1027 23:08:09.853299    1315 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-180608\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="12820d4d7830dca4b90efffe49493306" pod="kube-system/kube-scheduler-pause-180608"
	Oct 27 23:08:10 pause-180608 kubelet[1315]: E1027 23:08:10.058913    1315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-180608?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="3.2s"
	Oct 27 23:08:16 pause-180608 kubelet[1315]: E1027 23:08:16.660198    1315 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-180608\" is forbidden: User \"system:node:pause-180608\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-180608' and this object" podUID="0f78cf5ad0bd28587872deda44de4e77" pod="kube-system/kube-apiserver-pause-180608"
	Oct 27 23:08:16 pause-180608 kubelet[1315]: E1027 23:08:16.661364    1315 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-180608\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-180608' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 27 23:08:16 pause-180608 kubelet[1315]: E1027 23:08:16.934877    1315 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-180608\" is forbidden: User \"system:node:pause-180608\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-180608' and this object" podUID="0861646f3e1faf01baf275c91d815b55" pod="kube-system/kube-controller-manager-pause-180608"
	Oct 27 23:08:17 pause-180608 kubelet[1315]: E1027 23:08:17.116632    1315 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-pslcl\" is forbidden: User \"system:node:pause-180608\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-180608' and this object" podUID="1b2adb05-3d0c-4584-bc81-63f0cc6613ea" pod="kube-system/kindnet-pslcl"
	Oct 27 23:08:17 pause-180608 kubelet[1315]: E1027 23:08:17.355901    1315 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-22xkc\" is forbidden: User \"system:node:pause-180608\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-180608' and this object" podUID="c797f2db-9e8c-4853-a30f-9e3104917115" pod="kube-system/kube-proxy-22xkc"
	Oct 27 23:08:17 pause-180608 kubelet[1315]: E1027 23:08:17.440386    1315 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-jpzmv\" is forbidden: User \"system:node:pause-180608\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-180608' and this object" podUID="b6d46c56-4560-41fa-8260-aa53ca712c2a" pod="kube-system/coredns-66bc5c9577-jpzmv"
	Oct 27 23:08:17 pause-180608 kubelet[1315]: E1027 23:08:17.503895    1315 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-180608\" is forbidden: User \"system:node:pause-180608\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-180608' and this object" podUID="12820d4d7830dca4b90efffe49493306" pod="kube-system/kube-scheduler-pause-180608"
	Oct 27 23:08:26 pause-180608 kubelet[1315]: W1027 23:08:26.585198    1315 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 27 23:08:29 pause-180608 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 23:08:29 pause-180608 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 23:08:29 pause-180608 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
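The kube-scheduler and kubelet excerpts above tell one story: the apiserver at 192.168.76.2:8443 went down and came back during the pause/unpause cycle. The scheduler's informers log "Failed to watch ... is forbidden" while RBAC caches are still warming up (they clear once "Caches are synced" appears), and the kubelet retries status updates against the endpoint until it answers again. A manual probe of the same endpoint, assuming the pause-180608 profile is still up, would be:

	out/minikube-linux-arm64 -p pause-180608 ssh -- curl -sk https://192.168.76.2:8443/healthz

which should print "ok" once the apiserver is serving again.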
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-180608 -n pause-180608
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-180608 -n pause-180608: exit status 2 (545.447797ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
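The "(may be ok)" caveat reflects how minikube status reports: --format renders a Go template over the status struct (here {{.APIServer}} prints "Running"), while the process exit code encodes overall cluster health, so a paused or partially stopped profile can print Running and still exit non-zero. A sketch of a fuller check, using only template fields that appear elsewhere in this report:

	out/minikube-linux-arm64 status -p pause-180608 --format='{{.Host}} {{.APIServer}}'; echo "exit=$?"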
helpers_test.go:269: (dbg) Run:  kubectl --context pause-180608 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (8.40s)
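For triage outside the harness, the failing operation can be replayed by hand against the leftover profile; a minimal repro sketch, assuming pause-180608 has not been deleted:

	out/minikube-linux-arm64 pause -p pause-180608 --alsologtostderr -v=1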

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-477179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-477179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (338.892472ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:23:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
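The MK_ADDON_ENABLE_PAUSED path shows exactly what minikube runs to decide whether the cluster is paused: sudo runc list -f json, which reads runc's state directory (by default /run/runc). On this CRI-O node that directory is absent, so the probe errors out before any pause state can be read. A manual sketch of the same probe, plus a CRI-level alternative that does not depend on the runc root:

	out/minikube-linux-arm64 -p old-k8s-version-477179 ssh -- sudo runc --root /run/runc list -f json
	out/minikube-linux-arm64 -p old-k8s-version-477179 ssh -- sudo crictl ps -a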
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-477179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-477179 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-477179 describe deploy/metrics-server -n kube-system: exit status 1 (125.434877ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-477179 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
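The assertion at start_stop_delete_test.go:219 checks the image string on the metrics-server deployment; since the enable itself failed, the deployment was never created and the deployment-info block above is empty. What the test inspects is roughly the container image field, e.g.:

	kubectl --context old-k8s-version-477179 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

which would be expected to contain fake.domain/registry.k8s.io/echoserver:1.4 had the addon been enabled with the overridden registry.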
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-477179
helpers_test.go:243: (dbg) docker inspect old-k8s-version-477179:

-- stdout --
	[
	    {
	        "Id": "431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373",
	        "Created": "2025-10-27T23:22:26.560712085Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1349403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T23:22:26.642802039Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373/hostname",
	        "HostsPath": "/var/lib/docker/containers/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373/hosts",
	        "LogPath": "/var/lib/docker/containers/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373-json.log",
	        "Name": "/old-k8s-version-477179",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-477179:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-477179",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373",
	                "LowerDir": "/var/lib/docker/overlay2/d8f908fffe7b993d60442f64b7c5515882a75e6389218c999c1c83e3311e169e-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8f908fffe7b993d60442f64b7c5515882a75e6389218c999c1c83e3311e169e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8f908fffe7b993d60442f64b7c5515882a75e6389218c999c1c83e3311e169e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8f908fffe7b993d60442f64b7c5515882a75e6389218c999c1c83e3311e169e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-477179",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-477179/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-477179",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-477179",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-477179",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e0cb9b4377d418846701c8c20909e756a982f1d8a600645e01c33551e2afbce9",
	            "SandboxKey": "/var/run/docker/netns/e0cb9b4377d4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34559"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34560"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34563"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34561"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34562"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-477179": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:87:da:28:06:6b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "70c91a2d56ea508083256c63182c2c3e1ef772ce7bb88e6562d5b5aa2b7beeaf",
	                    "EndpointID": "ac4b252d06778520325b17365d75617e79cf5daca5ac793bc37a8853e2e82150",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-477179",
	                        "431f1160e1d3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
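In the inspect output above, each PortBindings entry requests HostIp 127.0.0.1 with an empty HostPort, meaning Docker assigns an ephemeral host port at container start; the resolved mappings show up under NetworkSettings.Ports (e.g. 8443/tcp -> 127.0.0.1:34562). The same mapping can be read back with:

	docker port old-k8s-version-477179 8443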
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-477179 -n old-k8s-version-477179
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-477179 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-477179 logs -n 25: (1.581404389s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                     │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-440075 sudo cat /etc/hosts                                                                                                         │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /etc/resolv.conf                                                                                                   │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo crictl pods                                                                                                            │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo crictl ps --all                                                                                                        │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                 │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo ip a s                                                                                                                 │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo ip r s                                                                                                                 │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo iptables-save                                                                                                          │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo iptables -t nat -L -n -v                                                                                               │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo systemctl status kubelet --all --full --no-pager                                                                       │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo systemctl cat kubelet --no-pager                                                                                       │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo journalctl -xeu kubelet --all --full --no-pager                                                                        │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /etc/kubernetes/kubelet.conf                                                                                       │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /var/lib/kubelet/config.yaml                                                                                       │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo systemctl status docker --all --full --no-pager                                                                        │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo systemctl cat docker --no-pager                                                                                        │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /etc/docker/daemon.json                                                                                            │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo docker system info                                                                                                     │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo systemctl status cri-docker --all --full --no-pager                                                                    │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo systemctl cat cri-docker --no-pager                                                                                    │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                               │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                         │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-477179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo cri-dockerd --version                                                                                                  │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo systemctl status containerd --all --full --no-pager                                                                    │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 23:22:19
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 23:22:19.014725 1348730 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:22:19.014973 1348730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:22:19.015003 1348730 out.go:374] Setting ErrFile to fd 2...
	I1027 23:22:19.015022 1348730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:22:19.015358 1348730 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:22:19.015992 1348730 out.go:368] Setting JSON to false
	I1027 23:22:19.017049 1348730 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":21888,"bootTime":1761585451,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 23:22:19.017197 1348730 start.go:143] virtualization:  
	I1027 23:22:19.023257 1348730 out.go:179] * [old-k8s-version-477179] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 23:22:19.026466 1348730 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:22:19.026682 1348730 notify.go:221] Checking for updates...
	I1027 23:22:19.031685 1348730 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:22:19.037134 1348730 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:22:19.041032 1348730 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 23:22:19.043999 1348730 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 23:22:19.046967 1348730 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 23:22:19.050484 1348730 config.go:182] Loaded profile config "bridge-440075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:22:19.050593 1348730 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:22:19.105721 1348730 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 23:22:19.105854 1348730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:22:19.216676 1348730 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 23:22:19.206797201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:22:19.216791 1348730 docker.go:318] overlay module found
	I1027 23:22:19.219886 1348730 out.go:179] * Using the docker driver based on user configuration
	I1027 23:22:19.222637 1348730 start.go:307] selected driver: docker
	I1027 23:22:19.222660 1348730 start.go:928] validating driver "docker" against <nil>
	I1027 23:22:19.222694 1348730 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:22:19.223442 1348730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:22:19.282073 1348730 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 23:22:19.272674084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:22:19.282229 1348730 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 23:22:19.282553 1348730 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:22:19.285515 1348730 out.go:179] * Using Docker driver with root privileges
	I1027 23:22:19.288352 1348730 cni.go:84] Creating CNI manager for ""
	I1027 23:22:19.288417 1348730 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:22:19.288430 1348730 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 23:22:19.288510 1348730 start.go:351] cluster config:
	{Name:old-k8s-version-477179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-477179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:22:19.293515 1348730 out.go:179] * Starting "old-k8s-version-477179" primary control-plane node in "old-k8s-version-477179" cluster
	I1027 23:22:19.296760 1348730 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 23:22:19.300678 1348730 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 23:22:19.303695 1348730 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 23:22:19.303770 1348730 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1027 23:22:19.303783 1348730 cache.go:59] Caching tarball of preloaded images
	I1027 23:22:19.303794 1348730 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 23:22:19.303906 1348730 preload.go:233] Found /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 23:22:19.303921 1348730 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1027 23:22:19.304047 1348730 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/config.json ...
	I1027 23:22:19.304070 1348730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/config.json: {Name:mk5c02b1a9bef7884880a05f0a5b4654cb28f03b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:22:19.324314 1348730 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 23:22:19.324335 1348730 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 23:22:19.324354 1348730 cache.go:233] Successfully downloaded all kic artifacts
	I1027 23:22:19.324376 1348730 start.go:360] acquireMachinesLock for old-k8s-version-477179: {Name:mka53febc0a54f4faa3bdae2e66b439a96a1b896 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:22:19.324493 1348730 start.go:364] duration metric: took 99.62µs to acquireMachinesLock for "old-k8s-version-477179"
	I1027 23:22:19.324524 1348730 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-477179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-477179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:22:19.324592 1348730 start.go:125] createHost starting for "" (driver="docker")
	I1027 23:22:19.371959 1343705 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.502164257s
	I1027 23:22:19.394782 1343705 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 23:22:19.411035 1343705 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 23:22:19.445479 1343705 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 23:22:19.445689 1343705 kubeadm.go:319] [mark-control-plane] Marking the node bridge-440075 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 23:22:19.461595 1343705 kubeadm.go:319] [bootstrap-token] Using token: b2nj47.wjqhevuofb2z6dh2
	I1027 23:22:19.464489 1343705 out.go:252]   - Configuring RBAC rules ...
	I1027 23:22:19.464623 1343705 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 23:22:19.472332 1343705 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 23:22:19.483900 1343705 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 23:22:19.488486 1343705 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 23:22:19.494191 1343705 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 23:22:19.499748 1343705 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 23:22:19.772750 1343705 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 23:22:20.279119 1343705 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 23:22:20.774538 1343705 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 23:22:20.775860 1343705 kubeadm.go:319] 
	I1027 23:22:20.775945 1343705 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 23:22:20.775954 1343705 kubeadm.go:319] 
	I1027 23:22:20.776034 1343705 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 23:22:20.776049 1343705 kubeadm.go:319] 
	I1027 23:22:20.776076 1343705 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 23:22:20.776138 1343705 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 23:22:20.776194 1343705 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 23:22:20.776202 1343705 kubeadm.go:319] 
	I1027 23:22:20.776259 1343705 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 23:22:20.776267 1343705 kubeadm.go:319] 
	I1027 23:22:20.776318 1343705 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 23:22:20.776326 1343705 kubeadm.go:319] 
	I1027 23:22:20.776381 1343705 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 23:22:20.776462 1343705 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 23:22:20.776537 1343705 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 23:22:20.776545 1343705 kubeadm.go:319] 
	I1027 23:22:20.776633 1343705 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 23:22:20.776717 1343705 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 23:22:20.776725 1343705 kubeadm.go:319] 
	I1027 23:22:20.776813 1343705 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token b2nj47.wjqhevuofb2z6dh2 \
	I1027 23:22:20.776923 1343705 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 \
	I1027 23:22:20.776948 1343705 kubeadm.go:319] 	--control-plane 
	I1027 23:22:20.776956 1343705 kubeadm.go:319] 
	I1027 23:22:20.777045 1343705 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 23:22:20.777053 1343705 kubeadm.go:319] 
	I1027 23:22:20.777139 1343705 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token b2nj47.wjqhevuofb2z6dh2 \
	I1027 23:22:20.777259 1343705 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 
	I1027 23:22:20.781561 1343705 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 23:22:20.781797 1343705 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 23:22:20.781927 1343705 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 23:22:20.781943 1343705 cni.go:84] Creating CNI manager for "bridge"
	I1027 23:22:20.785134 1343705 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1027 23:22:20.788130 1343705 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1027 23:22:20.797644 1343705 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1027 23:22:20.819709 1343705 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 23:22:20.819837 1343705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:20.819894 1343705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-440075 minikube.k8s.io/updated_at=2025_10_27T23_22_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=bridge-440075 minikube.k8s.io/primary=true
	I1027 23:22:20.860957 1343705 ops.go:34] apiserver oom_adj: -16
	I1027 23:22:20.961167 1343705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:21.461659 1343705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:21.962002 1343705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:22.462025 1343705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:19.328170 1348730 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 23:22:19.328419 1348730 start.go:159] libmachine.API.Create for "old-k8s-version-477179" (driver="docker")
	I1027 23:22:19.328463 1348730 client.go:173] LocalClient.Create starting
	I1027 23:22:19.328552 1348730 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem
	I1027 23:22:19.328592 1348730 main.go:143] libmachine: Decoding PEM data...
	I1027 23:22:19.328612 1348730 main.go:143] libmachine: Parsing certificate...
	I1027 23:22:19.328675 1348730 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem
	I1027 23:22:19.328698 1348730 main.go:143] libmachine: Decoding PEM data...
	I1027 23:22:19.328714 1348730 main.go:143] libmachine: Parsing certificate...
	I1027 23:22:19.329065 1348730 cli_runner.go:164] Run: docker network inspect old-k8s-version-477179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 23:22:19.346599 1348730 cli_runner.go:211] docker network inspect old-k8s-version-477179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 23:22:19.346682 1348730 network_create.go:284] running [docker network inspect old-k8s-version-477179] to gather additional debugging logs...
	I1027 23:22:19.346700 1348730 cli_runner.go:164] Run: docker network inspect old-k8s-version-477179
	W1027 23:22:19.362594 1348730 cli_runner.go:211] docker network inspect old-k8s-version-477179 returned with exit code 1
	I1027 23:22:19.362628 1348730 network_create.go:287] error running [docker network inspect old-k8s-version-477179]: docker network inspect old-k8s-version-477179: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-477179 not found
	I1027 23:22:19.362643 1348730 network_create.go:289] output of [docker network inspect old-k8s-version-477179]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-477179 not found
	
	** /stderr **
	I1027 23:22:19.362740 1348730 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:22:19.392996 1348730 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bec5bade6d32 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:b8:32:37:d1:1a} reservation:<nil>}
	I1027 23:22:19.395301 1348730 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0dc359f1a23c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:03:b5:bc:b2:ab} reservation:<nil>}
	I1027 23:22:19.395683 1348730 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6865072e7c41 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:f3:83:1f:14:0e} reservation:<nil>}
	I1027 23:22:19.395926 1348730 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2ee5fa5dfe1e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:f6:00:2c:08:3e} reservation:<nil>}
	I1027 23:22:19.396579 1348730 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a03bb0}
	I1027 23:22:19.396604 1348730 network_create.go:124] attempt to create docker network old-k8s-version-477179 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1027 23:22:19.396659 1348730 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-477179 old-k8s-version-477179
	I1027 23:22:19.474711 1348730 network_create.go:108] docker network old-k8s-version-477179 192.168.85.0/24 created
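
The skipping/using lines above come from minikube's free-subnet scan: candidate /24s start at 192.168.49.0 and step by 9 in the third octet (49, 58, 67, 76, 85, ...) until one is not already claimed. A simplified sketch of that scan, checking only local interface addresses (the real code also consults Docker-reported subnets and an in-process reservation table), could read:

package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface already has an address
// inside the candidate subnet (e.g. a br-xxx bridge gateway at .1).
func taken(subnet *net.IPNet) bool {
	ifaces, _ := net.Interfaces()
	for _, ifc := range ifaces {
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
				return true
			}
		}
	}
	return false
}

func main() {
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			panic(err)
		}
		if taken(subnet) {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
	fmt.Println("no free subnet found")
}

On this host 192.168.49.0/24 through 192.168.76.0/24 are held by earlier test profiles, so the scan lands on 192.168.85.0/24 exactly as logged.
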
	I1027 23:22:19.474742 1348730 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-477179" container
	I1027 23:22:19.474836 1348730 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 23:22:19.494598 1348730 cli_runner.go:164] Run: docker volume create old-k8s-version-477179 --label name.minikube.sigs.k8s.io=old-k8s-version-477179 --label created_by.minikube.sigs.k8s.io=true
	I1027 23:22:19.514194 1348730 oci.go:103] Successfully created a docker volume old-k8s-version-477179
	I1027 23:22:19.514294 1348730 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-477179-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-477179 --entrypoint /usr/bin/test -v old-k8s-version-477179:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 23:22:20.150264 1348730 oci.go:107] Successfully prepared a docker volume old-k8s-version-477179
	I1027 23:22:20.150311 1348730 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 23:22:20.150332 1348730 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 23:22:20.150444 1348730 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-477179:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 23:22:22.961659 1343705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:23.461285 1343705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:23.961861 1343705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:24.462241 1343705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:24.962334 1343705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:25.461907 1343705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:25.560954 1343705 kubeadm.go:1114] duration metric: took 4.741164909s to wait for elevateKubeSystemPrivileges
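
The repeated `kubectl get sa default` runs above are a roughly 500ms poll waiting for the default service account to exist before kube-system privileges are elevated. A sketch of such a polling loop, with an assumed two-minute deadline:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Exit status 0 means the service account exists.
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
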
	I1027 23:22:25.560982 1343705 kubeadm.go:403] duration metric: took 26.200239735s to StartCluster
	I1027 23:22:25.560999 1343705 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:22:25.561062 1343705 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:22:25.561742 1343705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:22:25.564844 1343705 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:22:25.564963 1343705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 23:22:25.565274 1343705 config.go:182] Loaded profile config "bridge-440075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:22:25.565328 1343705 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:22:25.565401 1343705 addons.go:69] Setting storage-provisioner=true in profile "bridge-440075"
	I1027 23:22:25.565415 1343705 addons.go:238] Setting addon storage-provisioner=true in "bridge-440075"
	I1027 23:22:25.565437 1343705 host.go:66] Checking if "bridge-440075" exists ...
	I1027 23:22:25.565970 1343705 cli_runner.go:164] Run: docker container inspect bridge-440075 --format={{.State.Status}}
	I1027 23:22:25.566553 1343705 addons.go:69] Setting default-storageclass=true in profile "bridge-440075"
	I1027 23:22:25.566583 1343705 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-440075"
	I1027 23:22:25.566899 1343705 cli_runner.go:164] Run: docker container inspect bridge-440075 --format={{.State.Status}}
	I1027 23:22:25.585922 1343705 out.go:179] * Verifying Kubernetes components...
	I1027 23:22:25.612215 1343705 addons.go:238] Setting addon default-storageclass=true in "bridge-440075"
	I1027 23:22:25.612254 1343705 host.go:66] Checking if "bridge-440075" exists ...
	I1027 23:22:25.612672 1343705 cli_runner.go:164] Run: docker container inspect bridge-440075 --format={{.State.Status}}
	I1027 23:22:25.618497 1343705 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:22:25.618595 1343705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:22:25.636323 1343705 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:22:25.636368 1343705 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:22:25.639388 1343705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-440075
	I1027 23:22:25.654541 1343705 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:22:25.654577 1343705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:22:25.654678 1343705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-440075
	I1027 23:22:25.678627 1343705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34554 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/bridge-440075/id_rsa Username:docker}
	I1027 23:22:25.689011 1343705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34554 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/bridge-440075/id_rsa Username:docker}
	I1027 23:22:25.816729 1343705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 23:22:25.937132 1343705 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:22:25.991174 1343705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:22:26.003658 1343705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:22:26.796783 1343705 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
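
The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a `log` directive before `errors` and a hosts{} stanza resolving host.minikube.internal ahead of the forward plugin. A Go sketch of the same rewrite, applied to an assumed kubeadm-default Corefile:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// A typical kubeadm-default Corefile fragment (assumed input).
	corefile := `.:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
    }`
	hosts := "        hosts {\n" +
		"           192.168.76.1 host.minikube.internal\n" +
		"           fallthrough\n" +
		"        }\n"
	// Mirror the two sed -i insertions from the log.
	out := strings.Replace(corefile, "        errors", "        log\n        errors", 1)
	out = strings.Replace(out, "        forward .", hosts+"        forward .", 1)
	fmt.Println(out)
}
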
	I1027 23:22:26.798661 1343705 node_ready.go:35] waiting up to 15m0s for node "bridge-440075" to be "Ready" ...
	I1027 23:22:26.863214 1343705 node_ready.go:49] node "bridge-440075" is "Ready"
	I1027 23:22:26.863241 1343705 node_ready.go:38] duration metric: took 64.553598ms for node "bridge-440075" to be "Ready" ...
	I1027 23:22:26.863254 1343705 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:22:26.863311 1343705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:22:27.326699 1343705 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-440075" context rescaled to 1 replicas
	I1027 23:22:28.141509 1343705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.150260827s)
	I1027 23:22:28.141576 1343705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.137843067s)
	I1027 23:22:28.141784 1343705 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.278462365s)
	I1027 23:22:28.141798 1343705 api_server.go:72] duration metric: took 2.576915193s to wait for apiserver process to appear ...
	I1027 23:22:28.141803 1343705 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:22:28.141832 1343705 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 23:22:28.174294 1343705 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 23:22:28.196707 1343705 api_server.go:141] control plane version: v1.34.1
	I1027 23:22:28.196735 1343705 api_server.go:131] duration metric: took 54.925753ms to wait for apiserver health ...
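
The healthz probe above is a plain HTTPS GET against the apiserver. A minimal sketch, skipping certificate verification because the apiserver certificate chains to minikube's private CA rather than a system root:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is signed by minikube's own CA, so a bare
		// probe like this one skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
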
	I1027 23:22:28.196744 1343705 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:22:28.204714 1343705 system_pods.go:59] 8 kube-system pods found
	I1027 23:22:28.204755 1343705 system_pods.go:61] "coredns-66bc5c9577-6rg98" [6f83c24b-3966-42c9-8d78-3df24d6477de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:22:28.204764 1343705 system_pods.go:61] "coredns-66bc5c9577-czlcs" [894cbfc5-6e37-40f9-a142-a66d791b332c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:22:28.204770 1343705 system_pods.go:61] "etcd-bridge-440075" [592bec8c-771c-4c12-92f9-c33b30cd22c6] Running
	I1027 23:22:28.204780 1343705 system_pods.go:61] "kube-apiserver-bridge-440075" [3aab377e-4d8d-4c4f-8a82-2082c0fbceb8] Running
	I1027 23:22:28.204787 1343705 system_pods.go:61] "kube-controller-manager-bridge-440075" [b108b4bb-dacd-4986-9add-14f3b475d515] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:22:28.204795 1343705 system_pods.go:61] "kube-proxy-rjfzh" [cfe539d2-b33b-4c2a-804a-b046c1a68057] Running
	I1027 23:22:28.204801 1343705 system_pods.go:61] "kube-scheduler-bridge-440075" [f7009b0b-a490-46e3-92f7-f61a01555108] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:22:28.204806 1343705 system_pods.go:61] "storage-provisioner" [0ac1a1ff-b57b-4e71-ad6c-f9a8881e06b8] Pending
	I1027 23:22:28.204819 1343705 system_pods.go:74] duration metric: took 8.06921ms to wait for pod list to return data ...
	I1027 23:22:28.204828 1343705 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:22:28.206294 1343705 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 23:22:26.416254 1348730 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-477179:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.265712127s)
	I1027 23:22:26.416297 1348730 kic.go:203] duration metric: took 6.265955799s to extract preloaded images to volume ...
	W1027 23:22:26.416445 1348730 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 23:22:26.416563 1348730 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 23:22:26.532211 1348730 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-477179 --name old-k8s-version-477179 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-477179 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-477179 --network old-k8s-version-477179 --ip 192.168.85.2 --volume old-k8s-version-477179:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 23:22:26.981940 1348730 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Running}}
	I1027 23:22:27.004410 1348730 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:22:27.031929 1348730 cli_runner.go:164] Run: docker exec old-k8s-version-477179 stat /var/lib/dpkg/alternatives/iptables
	I1027 23:22:27.105281 1348730 oci.go:144] the created container "old-k8s-version-477179" has a running status.
	I1027 23:22:27.105309 1348730 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa...
	I1027 23:22:27.767706 1348730 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 23:22:27.792239 1348730 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:22:27.814550 1348730 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 23:22:27.814576 1348730 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-477179 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 23:22:27.893378 1348730 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:22:27.922365 1348730 machine.go:94] provisionDockerMachine start ...
	I1027 23:22:27.922618 1348730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:22:27.952417 1348730 main.go:143] libmachine: Using SSH client type: native
	I1027 23:22:27.952758 1348730 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34559 <nil> <nil>}
	I1027 23:22:27.952774 1348730 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:22:27.953468 1348730 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1027 23:22:28.209324 1343705 addons.go:514] duration metric: took 2.643981147s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 23:22:28.229507 1343705 default_sa.go:45] found service account: "default"
	I1027 23:22:28.229582 1343705 default_sa.go:55] duration metric: took 24.741627ms for default service account to be created ...
	I1027 23:22:28.229607 1343705 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:22:28.233003 1343705 system_pods.go:86] 8 kube-system pods found
	I1027 23:22:28.233039 1343705 system_pods.go:89] "coredns-66bc5c9577-6rg98" [6f83c24b-3966-42c9-8d78-3df24d6477de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:22:28.233047 1343705 system_pods.go:89] "coredns-66bc5c9577-czlcs" [894cbfc5-6e37-40f9-a142-a66d791b332c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:22:28.233053 1343705 system_pods.go:89] "etcd-bridge-440075" [592bec8c-771c-4c12-92f9-c33b30cd22c6] Running
	I1027 23:22:28.233058 1343705 system_pods.go:89] "kube-apiserver-bridge-440075" [3aab377e-4d8d-4c4f-8a82-2082c0fbceb8] Running
	I1027 23:22:28.233065 1343705 system_pods.go:89] "kube-controller-manager-bridge-440075" [b108b4bb-dacd-4986-9add-14f3b475d515] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:22:28.233069 1343705 system_pods.go:89] "kube-proxy-rjfzh" [cfe539d2-b33b-4c2a-804a-b046c1a68057] Running
	I1027 23:22:28.233075 1343705 system_pods.go:89] "kube-scheduler-bridge-440075" [f7009b0b-a490-46e3-92f7-f61a01555108] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:22:28.233080 1343705 system_pods.go:89] "storage-provisioner" [0ac1a1ff-b57b-4e71-ad6c-f9a8881e06b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:22:28.233109 1343705 retry.go:31] will retry after 310.408131ms: missing components: kube-dns
	I1027 23:22:28.567088 1343705 system_pods.go:86] 8 kube-system pods found
	I1027 23:22:28.567126 1343705 system_pods.go:89] "coredns-66bc5c9577-6rg98" [6f83c24b-3966-42c9-8d78-3df24d6477de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:22:28.567135 1343705 system_pods.go:89] "coredns-66bc5c9577-czlcs" [894cbfc5-6e37-40f9-a142-a66d791b332c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:22:28.567140 1343705 system_pods.go:89] "etcd-bridge-440075" [592bec8c-771c-4c12-92f9-c33b30cd22c6] Running
	I1027 23:22:28.567145 1343705 system_pods.go:89] "kube-apiserver-bridge-440075" [3aab377e-4d8d-4c4f-8a82-2082c0fbceb8] Running
	I1027 23:22:28.567151 1343705 system_pods.go:89] "kube-controller-manager-bridge-440075" [b108b4bb-dacd-4986-9add-14f3b475d515] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:22:28.567156 1343705 system_pods.go:89] "kube-proxy-rjfzh" [cfe539d2-b33b-4c2a-804a-b046c1a68057] Running
	I1027 23:22:28.567162 1343705 system_pods.go:89] "kube-scheduler-bridge-440075" [f7009b0b-a490-46e3-92f7-f61a01555108] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:22:28.567171 1343705 system_pods.go:89] "storage-provisioner" [0ac1a1ff-b57b-4e71-ad6c-f9a8881e06b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:22:28.567178 1343705 system_pods.go:126] duration metric: took 337.552882ms to wait for k8s-apps to be running ...
	I1027 23:22:28.567188 1343705 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:22:28.567252 1343705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:22:28.604767 1343705 system_svc.go:56] duration metric: took 37.569608ms WaitForService to wait for kubelet
	I1027 23:22:28.604796 1343705 kubeadm.go:587] duration metric: took 3.039911925s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:22:28.604826 1343705 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:22:28.630909 1343705 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:22:28.630947 1343705 node_conditions.go:123] node cpu capacity is 2
	I1027 23:22:28.630961 1343705 node_conditions.go:105] duration metric: took 26.129224ms to run NodePressure ...
	I1027 23:22:28.630974 1343705 start.go:242] waiting for startup goroutines ...
	I1027 23:22:28.630981 1343705 start.go:247] waiting for cluster config update ...
	I1027 23:22:28.630992 1343705 start.go:256] writing updated cluster config ...
	I1027 23:22:28.631288 1343705 ssh_runner.go:195] Run: rm -f paused
	I1027 23:22:28.638804 1343705 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:22:28.664434 1343705 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6rg98" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 23:22:30.670437 1343705 pod_ready.go:104] pod "coredns-66bc5c9577-6rg98" is not "Ready", error: <nil>
	W1027 23:22:32.670824 1343705 pod_ready.go:104] pod "coredns-66bc5c9577-6rg98" is not "Ready", error: <nil>
	I1027 23:22:31.114750 1348730 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-477179
	
	I1027 23:22:31.114777 1348730 ubuntu.go:182] provisioning hostname "old-k8s-version-477179"
	I1027 23:22:31.114874 1348730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:22:31.135426 1348730 main.go:143] libmachine: Using SSH client type: native
	I1027 23:22:31.135743 1348730 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34559 <nil> <nil>}
	I1027 23:22:31.135762 1348730 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-477179 && echo "old-k8s-version-477179" | sudo tee /etc/hostname
	I1027 23:22:31.301470 1348730 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-477179
	
	I1027 23:22:31.301556 1348730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:22:31.319362 1348730 main.go:143] libmachine: Using SSH client type: native
	I1027 23:22:31.319674 1348730 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34559 <nil> <nil>}
	I1027 23:22:31.319691 1348730 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-477179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-477179/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-477179' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:22:31.471356 1348730 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 23:22:31.471434 1348730 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:22:31.471469 1348730 ubuntu.go:190] setting up certificates
	I1027 23:22:31.471505 1348730 provision.go:84] configureAuth start
	I1027 23:22:31.471606 1348730 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-477179
	I1027 23:22:31.490603 1348730 provision.go:143] copyHostCerts
	I1027 23:22:31.490678 1348730 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:22:31.490693 1348730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:22:31.490776 1348730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:22:31.490875 1348730 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:22:31.490886 1348730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:22:31.490918 1348730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:22:31.490983 1348730 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:22:31.490992 1348730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:22:31.491016 1348730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:22:31.491074 1348730 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-477179 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-477179]
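
The server certificate above is issued with both IP and DNS SANs (127.0.0.1, 192.168.85.2, localhost, minikube, old-k8s-version-477179). A self-signed sketch with Go's crypto/x509 showing how such a SAN set is expressed; minikube signs with its CA key instead, and the key type and validity below are assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-477179"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-477179"},
	}
	// Self-signed here for brevity; the real cert is signed by ca-key.pem.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
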
	I1027 23:22:32.274506 1348730 provision.go:177] copyRemoteCerts
	I1027 23:22:32.274576 1348730 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:22:32.274623 1348730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:22:32.291765 1348730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34559 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:22:32.398100 1348730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:22:32.415936 1348730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1027 23:22:32.434760 1348730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 23:22:32.454529 1348730 provision.go:87] duration metric: took 982.978951ms to configureAuth
	I1027 23:22:32.454630 1348730 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:22:32.454857 1348730 config.go:182] Loaded profile config "old-k8s-version-477179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 23:22:32.454996 1348730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:22:32.477620 1348730 main.go:143] libmachine: Using SSH client type: native
	I1027 23:22:32.477946 1348730 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34559 <nil> <nil>}
	I1027 23:22:32.477967 1348730 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:22:32.764521 1348730 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:22:32.764540 1348730 machine.go:97] duration metric: took 4.842020857s to provisionDockerMachine
	I1027 23:22:32.764550 1348730 client.go:176] duration metric: took 13.436075313s to LocalClient.Create
	I1027 23:22:32.764563 1348730 start.go:167] duration metric: took 13.436146379s to libmachine.API.Create "old-k8s-version-477179"
	I1027 23:22:32.764570 1348730 start.go:293] postStartSetup for "old-k8s-version-477179" (driver="docker")
	I1027 23:22:32.764581 1348730 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:22:32.764641 1348730 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:22:32.764680 1348730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:22:32.784399 1348730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34559 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:22:32.890654 1348730 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:22:32.894007 1348730 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:22:32.894042 1348730 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:22:32.894054 1348730 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:22:32.894117 1348730 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:22:32.894213 1348730 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:22:32.894322 1348730 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:22:32.902135 1348730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:22:32.920641 1348730 start.go:296] duration metric: took 156.055509ms for postStartSetup
	I1027 23:22:32.921018 1348730 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-477179
	I1027 23:22:32.937831 1348730 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/config.json ...
	I1027 23:22:32.938113 1348730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:22:32.938173 1348730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:22:32.955064 1348730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34559 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:22:33.061736 1348730 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:22:33.068236 1348730 start.go:128] duration metric: took 13.743627267s to createHost
	I1027 23:22:33.068263 1348730 start.go:83] releasing machines lock for "old-k8s-version-477179", held for 13.743757246s
	I1027 23:22:33.068340 1348730 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-477179
	I1027 23:22:33.085562 1348730 ssh_runner.go:195] Run: cat /version.json
	I1027 23:22:33.085622 1348730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:22:33.085885 1348730 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:22:33.085960 1348730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:22:33.110181 1348730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34559 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:22:33.123616 1348730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34559 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:22:33.218414 1348730 ssh_runner.go:195] Run: systemctl --version
	I1027 23:22:33.313400 1348730 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:22:33.350696 1348730 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:22:33.355764 1348730 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:22:33.355868 1348730 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:22:33.386552 1348730 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1027 23:22:33.386631 1348730 start.go:496] detecting cgroup driver to use...
	I1027 23:22:33.386682 1348730 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 23:22:33.386739 1348730 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:22:33.405847 1348730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:22:33.419696 1348730 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:22:33.419756 1348730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:22:33.438482 1348730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:22:33.459327 1348730 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:22:33.579368 1348730 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:22:33.712642 1348730 docker.go:234] disabling docker service ...
	I1027 23:22:33.712738 1348730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:22:33.734800 1348730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:22:33.749094 1348730 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:22:33.877068 1348730 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:22:33.990075 1348730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:22:34.003812 1348730 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:22:34.021021 1348730 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1027 23:22:34.021148 1348730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:22:34.030913 1348730 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:22:34.031043 1348730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:22:34.041262 1348730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:22:34.050292 1348730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:22:34.061363 1348730 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:22:34.072002 1348730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:22:34.081086 1348730 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:22:34.095386 1348730 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:22:34.105139 1348730 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:22:34.113047 1348730 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
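
The pause_image and cgroup_manager sed edits above can be read as two anchored regexp rewrites of /etc/crio/crio.conf.d/02-crio.conf. A Go sketch of just those two (the conmon_cgroup and default_sysctls edits are omitted for brevity, and writing the file requires root):

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
}

After these rewrites the log restarts CRI-O so the new pause image and cgroupfs driver take effect.
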
	I1027 23:22:34.120772 1348730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:22:34.246216 1348730 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 23:22:34.376694 1348730 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:22:34.376809 1348730 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:22:34.381273 1348730 start.go:564] Will wait 60s for crictl version
	I1027 23:22:34.381404 1348730 ssh_runner.go:195] Run: which crictl
	I1027 23:22:34.385924 1348730 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:22:34.412002 1348730 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 23:22:34.412135 1348730 ssh_runner.go:195] Run: crio --version
	I1027 23:22:34.447485 1348730 ssh_runner.go:195] Run: crio --version
	I1027 23:22:34.478583 1348730 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1027 23:22:34.481389 1348730 cli_runner.go:164] Run: docker network inspect old-k8s-version-477179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:22:34.497924 1348730 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 23:22:34.502601 1348730 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:22:34.513354 1348730 kubeadm.go:884] updating cluster {Name:old-k8s-version-477179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-477179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:22:34.513466 1348730 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 23:22:34.513522 1348730 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:22:34.546137 1348730 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:22:34.546162 1348730 crio.go:433] Images already preloaded, skipping extraction
	I1027 23:22:34.546219 1348730 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:22:34.571466 1348730 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:22:34.571492 1348730 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:22:34.571502 1348730 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1027 23:22:34.571587 1348730 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-477179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-477179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 23:22:34.571677 1348730 ssh_runner.go:195] Run: crio config
	I1027 23:22:34.632230 1348730 cni.go:84] Creating CNI manager for ""
	I1027 23:22:34.632302 1348730 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:22:34.632333 1348730 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 23:22:34.632387 1348730 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-477179 NodeName:old-k8s-version-477179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:22:34.632574 1348730 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-477179"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 23:22:34.632693 1348730 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1027 23:22:34.640748 1348730 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:22:34.640952 1348730 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:22:34.649311 1348730 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1027 23:22:34.663390 1348730 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:22:34.678519 1348730 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1027 23:22:34.693095 1348730 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:22:34.696808 1348730 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:22:34.706706 1348730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:22:34.820629 1348730 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:22:34.837140 1348730 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179 for IP: 192.168.85.2
	I1027 23:22:34.837175 1348730 certs.go:195] generating shared ca certs ...
	I1027 23:22:34.837209 1348730 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:22:34.837387 1348730 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:22:34.837458 1348730 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:22:34.837470 1348730 certs.go:257] generating profile certs ...
	I1027 23:22:34.837543 1348730 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.key
	I1027 23:22:34.837566 1348730 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.crt with IP's: []
	I1027 23:22:35.465533 1348730 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.crt ...
	I1027 23:22:35.465569 1348730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.crt: {Name:mk3194cb184aac332161a6d5e88caef87f5750cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:22:35.465778 1348730 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.key ...
	I1027 23:22:35.465793 1348730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.key: {Name:mkfb3e831e54923285015dcb22ca12a3b817ea39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:22:35.465900 1348730 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/apiserver.key.e54ee9ff
	I1027 23:22:35.465918 1348730 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/apiserver.crt.e54ee9ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1027 23:22:36.140684 1348730 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/apiserver.crt.e54ee9ff ...
	I1027 23:22:36.140715 1348730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/apiserver.crt.e54ee9ff: {Name:mk250a828348a5b798c873d57783dc0d296bf1e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:22:36.140919 1348730 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/apiserver.key.e54ee9ff ...
	I1027 23:22:36.140938 1348730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/apiserver.key.e54ee9ff: {Name:mk4f586964fa49f06cb4cacf764409cc3391c15a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:22:36.141031 1348730 certs.go:382] copying /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/apiserver.crt.e54ee9ff -> /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/apiserver.crt
	I1027 23:22:36.141112 1348730 certs.go:386] copying /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/apiserver.key.e54ee9ff -> /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/apiserver.key
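The apiserver serving certificate generated above is issued for 10.96.0.1 (the first address of the 10.96.0.0/12 service CIDR, i.e. the in-cluster `kubernetes` service IP), the 127.0.0.1 and 10.0.0.1 fallbacks, and the node IP 192.168.85.2. After the scp steps below place it on the node, the SAN list can be checked with a standard openssl invocation:

    # Print the IPs/DNS names the apiserver cert is valid for.
    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'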
	I1027 23:22:36.141180 1348730 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/proxy-client.key
	I1027 23:22:36.141200 1348730 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/proxy-client.crt with IP's: []
	I1027 23:22:36.330151 1348730 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/proxy-client.crt ...
	I1027 23:22:36.330183 1348730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/proxy-client.crt: {Name:mk001558e78710583ab3f8ebe5d1eca95e8c49f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:22:36.330368 1348730 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/proxy-client.key ...
	I1027 23:22:36.330414 1348730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/proxy-client.key: {Name:mk5e823499036f727f8d13b4054f9aac43005df3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:22:36.330662 1348730 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:22:36.330708 1348730 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:22:36.330730 1348730 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:22:36.330755 1348730 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:22:36.330782 1348730 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:22:36.330806 1348730 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:22:36.330850 1348730 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:22:36.331476 1348730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:22:36.351994 1348730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:22:36.376292 1348730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:22:36.399221 1348730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:22:36.418560 1348730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1027 23:22:36.439107 1348730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 23:22:36.463641 1348730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:22:36.483865 1348730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 23:22:36.503871 1348730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:22:36.525687 1348730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:22:36.546176 1348730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:22:36.566314 1348730 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:22:36.579879 1348730 ssh_runner.go:195] Run: openssl version
	I1027 23:22:36.586131 1348730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:22:36.594342 1348730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:22:36.598263 1348730 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:22:36.598325 1348730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:22:36.639761 1348730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 23:22:36.648378 1348730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:22:36.657047 1348730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:22:36.660970 1348730 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:22:36.661044 1348730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:22:36.705270 1348730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
	I1027 23:22:36.713371 1348730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:22:36.721559 1348730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:22:36.725487 1348730 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:22:36.725594 1348730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:22:36.768532 1348730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
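The `openssl x509 -hash` runs above compute the subject-name hash OpenSSL uses to locate CA certificates: a trust directory is scanned for symlinks named `<hash>.0`, which is exactly what the `ln -fs ... /etc/ssl/certs/b5213941.0` style commands create for minikubeCA.pem and the two test certs. The same convention, as a sketch for one cert:

    # Link a CA cert into the trust dir under its OpenSSL subject hash.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"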
	I1027 23:22:36.777209 1348730 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:22:36.781070 1348730 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 23:22:36.781150 1348730 kubeadm.go:401] StartCluster: {Name:old-k8s-version-477179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-477179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:22:36.781244 1348730 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:22:36.781313 1348730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:22:36.808524 1348730 cri.go:89] found id: ""
	I1027 23:22:36.808644 1348730 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:22:36.816889 1348730 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 23:22:36.825694 1348730 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 23:22:36.825770 1348730 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 23:22:36.834217 1348730 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 23:22:36.834239 1348730 kubeadm.go:158] found existing configuration files:
	
	I1027 23:22:36.834323 1348730 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 23:22:36.842517 1348730 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 23:22:36.842670 1348730 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 23:22:36.850747 1348730 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 23:22:36.859095 1348730 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 23:22:36.859161 1348730 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 23:22:36.866560 1348730 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 23:22:36.874532 1348730 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 23:22:36.874618 1348730 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 23:22:36.882059 1348730 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 23:22:36.890028 1348730 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 23:22:36.890122 1348730 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
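Each grep/rm pair above is the same stale-config probe: if a kubeconfig under /etc/kubernetes does not reference https://control-plane.minikube.internal:8443 (here the files simply do not exist, so grep exits with status 2), it is deleted so the upcoming kubeadm init can regenerate it. Condensed, the pattern is:

    # Drop any kubeconfig that does not point at the expected control plane.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' \
        "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done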
	I1027 23:22:36.897908 1348730 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 23:22:36.943457 1348730 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1027 23:22:36.943527 1348730 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 23:22:36.989595 1348730 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 23:22:36.989673 1348730 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 23:22:36.989717 1348730 kubeadm.go:319] OS: Linux
	I1027 23:22:36.989768 1348730 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 23:22:36.989840 1348730 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1027 23:22:36.989896 1348730 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 23:22:36.989951 1348730 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 23:22:36.990005 1348730 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 23:22:36.990059 1348730 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 23:22:36.990110 1348730 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 23:22:36.990165 1348730 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 23:22:36.990228 1348730 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1027 23:22:37.080600 1348730 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 23:22:37.080768 1348730 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 23:22:37.080889 1348730 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1027 23:22:37.266551 1348730 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1027 23:22:35.171859 1343705 pod_ready.go:104] pod "coredns-66bc5c9577-6rg98" is not "Ready", error: <nil>
	W1027 23:22:37.672689 1343705 pod_ready.go:104] pod "coredns-66bc5c9577-6rg98" is not "Ready", error: <nil>
	I1027 23:22:37.271841 1348730 out.go:252]   - Generating certificates and keys ...
	I1027 23:22:37.271943 1348730 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 23:22:37.272027 1348730 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 23:22:37.688842 1348730 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 23:22:38.202910 1348730 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 23:22:38.661469 1348730 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	W1027 23:22:40.172552 1343705 pod_ready.go:104] pod "coredns-66bc5c9577-6rg98" is not "Ready", error: <nil>
	W1027 23:22:42.672515 1343705 pod_ready.go:104] pod "coredns-66bc5c9577-6rg98" is not "Ready", error: <nil>
	I1027 23:22:39.146171 1348730 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 23:22:39.589357 1348730 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 23:22:39.589861 1348730 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-477179] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 23:22:40.012500 1348730 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 23:22:40.012741 1348730 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-477179] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 23:22:41.154981 1348730 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 23:22:41.388546 1348730 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 23:22:42.489746 1348730 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 23:22:42.490311 1348730 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 23:22:42.682660 1348730 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 23:22:43.591205 1348730 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 23:22:43.945781 1348730 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 23:22:44.118244 1348730 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 23:22:44.118996 1348730 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 23:22:44.121711 1348730 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1027 23:22:44.674094 1343705 pod_ready.go:104] pod "coredns-66bc5c9577-6rg98" is not "Ready", error: <nil>
	W1027 23:22:47.170314 1343705 pod_ready.go:104] pod "coredns-66bc5c9577-6rg98" is not "Ready", error: <nil>
	I1027 23:22:44.125178 1348730 out.go:252]   - Booting up control plane ...
	I1027 23:22:44.125357 1348730 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 23:22:44.125763 1348730 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 23:22:44.126642 1348730 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 23:22:44.143948 1348730 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 23:22:44.144905 1348730 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 23:22:44.144956 1348730 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 23:22:44.282964 1348730 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1027 23:22:51.284258 1348730 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.002489 seconds
	I1027 23:22:51.285057 1348730 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 23:22:51.304635 1348730 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 23:22:51.838843 1348730 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 23:22:51.839057 1348730 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-477179 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 23:22:52.358821 1348730 kubeadm.go:319] [bootstrap-token] Using token: mgxavw.1vd07er4wija5xpu
	W1027 23:22:49.171478 1343705 pod_ready.go:104] pod "coredns-66bc5c9577-6rg98" is not "Ready", error: <nil>
	W1027 23:22:51.670671 1343705 pod_ready.go:104] pod "coredns-66bc5c9577-6rg98" is not "Ready", error: <nil>
	I1027 23:22:52.361822 1348730 out.go:252]   - Configuring RBAC rules ...
	I1027 23:22:52.362042 1348730 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 23:22:52.383936 1348730 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 23:22:52.405913 1348730 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 23:22:52.412274 1348730 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 23:22:52.416881 1348730 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 23:22:52.421317 1348730 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 23:22:52.443325 1348730 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 23:22:52.755093 1348730 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 23:22:52.794539 1348730 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 23:22:52.796043 1348730 kubeadm.go:319] 
	I1027 23:22:52.796120 1348730 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 23:22:52.796126 1348730 kubeadm.go:319] 
	I1027 23:22:52.796206 1348730 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 23:22:52.796225 1348730 kubeadm.go:319] 
	I1027 23:22:52.796252 1348730 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 23:22:52.796314 1348730 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 23:22:52.796366 1348730 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 23:22:52.796371 1348730 kubeadm.go:319] 
	I1027 23:22:52.796427 1348730 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 23:22:52.796432 1348730 kubeadm.go:319] 
	I1027 23:22:52.796482 1348730 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 23:22:52.796486 1348730 kubeadm.go:319] 
	I1027 23:22:52.796540 1348730 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 23:22:52.796619 1348730 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 23:22:52.796690 1348730 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 23:22:52.796694 1348730 kubeadm.go:319] 
	I1027 23:22:52.796782 1348730 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 23:22:52.796867 1348730 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 23:22:52.796872 1348730 kubeadm.go:319] 
	I1027 23:22:52.796959 1348730 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mgxavw.1vd07er4wija5xpu \
	I1027 23:22:52.797066 1348730 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 \
	I1027 23:22:52.797088 1348730 kubeadm.go:319] 	--control-plane 
	I1027 23:22:52.797093 1348730 kubeadm.go:319] 
	I1027 23:22:52.797181 1348730 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 23:22:52.797189 1348730 kubeadm.go:319] 
	I1027 23:22:52.797274 1348730 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mgxavw.1vd07er4wija5xpu \
	I1027 23:22:52.797380 1348730 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 
	I1027 23:22:52.803111 1348730 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 23:22:52.803305 1348730 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
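The join commands printed above carry a --discovery-token-ca-cert-hash, which per the kubeadm documentation is the SHA-256 of the cluster CA's DER-encoded public key. It can be recomputed on the control plane to validate a join command before use; a sketch against the CA placed under /var/lib/minikube/certs earlier (`openssl pkey` is used here in place of the docs' `openssl rsa` so non-RSA keys also work):

    # Recompute the discovery-token CA cert hash from the cluster CA.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -hex | sed 's/^.* //'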
	I1027 23:22:52.803352 1348730 cni.go:84] Creating CNI manager for ""
	I1027 23:22:52.803374 1348730 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:22:52.806635 1348730 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 23:22:52.809515 1348730 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 23:22:52.828960 1348730 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1027 23:22:52.828979 1348730 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 23:22:52.869004 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
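With the docker driver plus the crio runtime, minikube recommends kindnet and applies its manifest through the exact kubectl invocation above. Once applied, the rollout can be checked like any kube-system workload; the `app=kindnet` label below is the one the upstream kindnet manifest uses and is an assumption here, not something shown in this log:

    # Check that the kindnet CNI pods came up after the manifest was applied.
    # (label app=kindnet assumed from the upstream kindnet daemonset)
    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
      get pods -l app=kindnet -o wide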
	I1027 23:22:53.814591 1348730 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 23:22:53.814737 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:53.814806 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-477179 minikube.k8s.io/updated_at=2025_10_27T23_22_53_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=old-k8s-version-477179 minikube.k8s.io/primary=true
	I1027 23:22:53.949882 1348730 ops.go:34] apiserver oom_adj: -16
	I1027 23:22:53.950042 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1027 23:22:53.671705 1343705 pod_ready.go:104] pod "coredns-66bc5c9577-6rg98" is not "Ready", error: <nil>
	W1027 23:22:56.170090 1343705 pod_ready.go:104] pod "coredns-66bc5c9577-6rg98" is not "Ready", error: <nil>
	I1027 23:22:54.450290 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:54.950700 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:55.450639 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:55.950443 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:56.450362 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:56.950997 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:57.450865 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:57.950793 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:58.450777 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:58.950937 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1027 23:22:58.170723 1343705 pod_ready.go:104] pod "coredns-66bc5c9577-6rg98" is not "Ready", error: <nil>
	W1027 23:23:00.193395 1343705 pod_ready.go:104] pod "coredns-66bc5c9577-6rg98" is not "Ready", error: <nil>
	I1027 23:23:02.669924 1343705 pod_ready.go:94] pod "coredns-66bc5c9577-6rg98" is "Ready"
	I1027 23:23:02.670006 1343705 pod_ready.go:86] duration metric: took 34.005541088s for pod "coredns-66bc5c9577-6rg98" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:02.670031 1343705 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-czlcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:02.672177 1343705 pod_ready.go:99] pod "coredns-66bc5c9577-czlcs" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-czlcs" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-czlcs" not found
	I1027 23:23:02.672201 1343705 pod_ready.go:86] duration metric: took 2.155591ms for pod "coredns-66bc5c9577-czlcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:02.674927 1343705 pod_ready.go:83] waiting for pod "etcd-bridge-440075" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:02.679455 1343705 pod_ready.go:94] pod "etcd-bridge-440075" is "Ready"
	I1027 23:23:02.679484 1343705 pod_ready.go:86] duration metric: took 4.529369ms for pod "etcd-bridge-440075" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:02.681687 1343705 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-440075" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:02.688185 1343705 pod_ready.go:94] pod "kube-apiserver-bridge-440075" is "Ready"
	I1027 23:23:02.688225 1343705 pod_ready.go:86] duration metric: took 6.509081ms for pod "kube-apiserver-bridge-440075" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:02.691401 1343705 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-440075" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:22:59.450916 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:22:59.950310 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:23:00.451149 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:23:00.950221 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:23:01.450695 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:23:01.950442 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:23:02.450474 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:23:02.950163 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:23:03.450237 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:23:03.950796 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:23:03.069319 1343705 pod_ready.go:94] pod "kube-controller-manager-bridge-440075" is "Ready"
	I1027 23:23:03.069393 1343705 pod_ready.go:86] duration metric: took 377.967765ms for pod "kube-controller-manager-bridge-440075" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:03.267953 1343705 pod_ready.go:83] waiting for pod "kube-proxy-rjfzh" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:03.667238 1343705 pod_ready.go:94] pod "kube-proxy-rjfzh" is "Ready"
	I1027 23:23:03.667268 1343705 pod_ready.go:86] duration metric: took 399.287329ms for pod "kube-proxy-rjfzh" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:03.867487 1343705 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-440075" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:04.268228 1343705 pod_ready.go:94] pod "kube-scheduler-bridge-440075" is "Ready"
	I1027 23:23:04.268299 1343705 pod_ready.go:86] duration metric: took 400.784351ms for pod "kube-scheduler-bridge-440075" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:04.268327 1343705 pod_ready.go:40] duration metric: took 35.629484597s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:23:04.329972 1343705 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:23:04.333170 1343705 out.go:179] * Done! kubectl is now configured to use "bridge-440075" cluster and "default" namespace by default
	I1027 23:23:04.453273 1348730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:23:04.608320 1348730 kubeadm.go:1114] duration metric: took 10.793631333s to wait for elevateKubeSystemPrivileges
	I1027 23:23:04.608351 1348730 kubeadm.go:403] duration metric: took 27.82722505s to StartCluster
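The half-second `kubectl get sa default` loop that ran above is how minikube waits for the controller manager to create the default ServiceAccount (the elevateKubeSystemPrivileges step, 10.79s in this run); only then can the minikube-rbac cluster-admin binding for kube-system:default created earlier take effect. The wait reduces to a small retry loop:

    # Poll until the default ServiceAccount exists, as the 0.5s loop above does.
    until kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1; do
      sleep 0.5
    done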
	I1027 23:23:04.608369 1348730 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:23:04.608429 1348730 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:23:04.609444 1348730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:23:04.609674 1348730 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:23:04.609880 1348730 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 23:23:04.610260 1348730 config.go:182] Loaded profile config "old-k8s-version-477179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 23:23:04.610300 1348730 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:23:04.610358 1348730 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-477179"
	I1027 23:23:04.610375 1348730 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-477179"
	I1027 23:23:04.610427 1348730 host.go:66] Checking if "old-k8s-version-477179" exists ...
	I1027 23:23:04.610574 1348730 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-477179"
	I1027 23:23:04.610597 1348730 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-477179"
	I1027 23:23:04.610905 1348730 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:23:04.610916 1348730 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:23:04.614901 1348730 out.go:179] * Verifying Kubernetes components...
	I1027 23:23:04.620579 1348730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:23:04.670967 1348730 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-477179"
	I1027 23:23:04.671007 1348730 host.go:66] Checking if "old-k8s-version-477179" exists ...
	I1027 23:23:04.671427 1348730 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:23:04.686904 1348730 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:23:04.689925 1348730 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:23:04.689950 1348730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:23:04.690011 1348730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:04.724515 1348730 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:23:04.724536 1348730 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:23:04.724596 1348730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:04.739742 1348730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34559 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:23:04.764413 1348730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34559 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:23:05.241820 1348730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:23:05.308404 1348730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:23:05.329555 1348730 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 23:23:05.329664 1348730 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:23:06.561939 1348730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.320087776s)
	I1027 23:23:06.562004 1348730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.253578311s)
	I1027 23:23:06.562339 1348730 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.232656226s)
	I1027 23:23:06.563202 1348730 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-477179" to be "Ready" ...
	I1027 23:23:06.563453 1348730 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.233869043s)
	I1027 23:23:06.563470 1348730 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
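The sed pipeline completed above edits the coredns ConfigMap in place: it inserts a `hosts` block mapping host.minikube.internal to the gateway address 192.168.85.1 (with fallthrough) ahead of the `forward . /etc/resolv.conf` line, adds a `log` directive ahead of `errors`, and replaces the ConfigMap. The injected record can be read back afterwards:

    # Show the Corefile after the host.minikube.internal record is injected.
    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
      get configmap coredns -o jsonpath='{.data.Corefile}'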
	I1027 23:23:06.643871 1348730 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 23:23:06.646708 1348730 addons.go:514] duration metric: took 2.036393902s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 23:23:07.071781 1348730 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-477179" context rescaled to 1 replicas
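Rescaling the coredns deployment to one replica (the kapi.go line above) keeps this single-node cluster from carrying a second, redundant DNS pod; done by hand it is a one-liner:

    # Mirror minikube's rescale of CoreDNS to a single replica.
    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
      scale deployment coredns --replicas=1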
	W1027 23:23:08.566623 1348730 node_ready.go:57] node "old-k8s-version-477179" has "Ready":"False" status (will retry)
	W1027 23:23:10.566920 1348730 node_ready.go:57] node "old-k8s-version-477179" has "Ready":"False" status (will retry)
	W1027 23:23:13.066514 1348730 node_ready.go:57] node "old-k8s-version-477179" has "Ready":"False" status (will retry)
	W1027 23:23:15.067184 1348730 node_ready.go:57] node "old-k8s-version-477179" has "Ready":"False" status (will retry)
	W1027 23:23:17.566332 1348730 node_ready.go:57] node "old-k8s-version-477179" has "Ready":"False" status (will retry)
	W1027 23:23:19.566417 1348730 node_ready.go:57] node "old-k8s-version-477179" has "Ready":"False" status (will retry)
	I1027 23:23:20.074917 1348730 node_ready.go:49] node "old-k8s-version-477179" is "Ready"
	I1027 23:23:20.074954 1348730 node_ready.go:38] duration metric: took 13.511723629s for node "old-k8s-version-477179" to be "Ready" ...
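The node_ready retries above poll the Node object until its Ready condition flips to True (13.5s from kubelet start here, most of it spent waiting on the CNI). kubectl can express the same wait declaratively:

    # Equivalent declarative wait for the node to report Ready.
    kubectl --kubeconfig=/var/lib/minikube/kubeconfig wait \
      --for=condition=Ready node/old-k8s-version-477179 --timeout=6m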
	I1027 23:23:20.074971 1348730 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:23:20.075037 1348730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:23:20.094007 1348730 api_server.go:72] duration metric: took 15.484294693s to wait for apiserver process to appear ...
	I1027 23:23:20.094040 1348730 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:23:20.094062 1348730 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 23:23:20.104558 1348730 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 23:23:20.106129 1348730 api_server.go:141] control plane version: v1.28.0
	I1027 23:23:20.106160 1348730 api_server.go:131] duration metric: took 12.112106ms to wait for apiserver health ...
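The health probe above hits the apiserver's /healthz endpoint directly at 192.168.85.2:8443 and treats a 200 with body "ok" as healthy. The same check from a shell needs -k (or --cacert with the cluster CA), since the serving cert is signed by minikubeCA rather than a system-trusted authority:

    # Probe apiserver health the way the log does; expect "ok".
    curl -k https://192.168.85.2:8443/healthz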
	I1027 23:23:20.106170 1348730 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:23:20.110204 1348730 system_pods.go:59] 8 kube-system pods found
	I1027 23:23:20.110250 1348730 system_pods.go:61] "coredns-5dd5756b68-zmrh9" [da1efa5b-0929-4757-a96a-7b030212b09b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:23:20.110257 1348730 system_pods.go:61] "etcd-old-k8s-version-477179" [be864fb9-c8b5-4aae-bc2d-69d5d9d85994] Running
	I1027 23:23:20.110263 1348730 system_pods.go:61] "kindnet-z26d6" [3b032e58-90ac-4c80-95f1-1d1fcb2b96f3] Running
	I1027 23:23:20.110268 1348730 system_pods.go:61] "kube-apiserver-old-k8s-version-477179" [72d86f1f-8f08-49fe-bf99-ec1a3849859f] Running
	I1027 23:23:20.110273 1348730 system_pods.go:61] "kube-controller-manager-old-k8s-version-477179" [78689547-e0c2-45a3-a2d8-2ee973b8d629] Running
	I1027 23:23:20.110278 1348730 system_pods.go:61] "kube-proxy-t6hvl" [2953b030-a25c-4882-9fab-7361700ee9ec] Running
	I1027 23:23:20.110284 1348730 system_pods.go:61] "kube-scheduler-old-k8s-version-477179" [b84fc635-c8d8-4276-9dc5-3c077b3cb355] Running
	I1027 23:23:20.110291 1348730 system_pods.go:61] "storage-provisioner" [cbfbf2cd-d56e-4b50-80d3-178ee16d8c54] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:23:20.110298 1348730 system_pods.go:74] duration metric: took 4.122233ms to wait for pod list to return data ...
	I1027 23:23:20.110307 1348730 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:23:20.113358 1348730 default_sa.go:45] found service account: "default"
	I1027 23:23:20.113387 1348730 default_sa.go:55] duration metric: took 3.030008ms for default service account to be created ...
	I1027 23:23:20.113397 1348730 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:23:20.117113 1348730 system_pods.go:86] 8 kube-system pods found
	I1027 23:23:20.117149 1348730 system_pods.go:89] "coredns-5dd5756b68-zmrh9" [da1efa5b-0929-4757-a96a-7b030212b09b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:23:20.117156 1348730 system_pods.go:89] "etcd-old-k8s-version-477179" [be864fb9-c8b5-4aae-bc2d-69d5d9d85994] Running
	I1027 23:23:20.117164 1348730 system_pods.go:89] "kindnet-z26d6" [3b032e58-90ac-4c80-95f1-1d1fcb2b96f3] Running
	I1027 23:23:20.117168 1348730 system_pods.go:89] "kube-apiserver-old-k8s-version-477179" [72d86f1f-8f08-49fe-bf99-ec1a3849859f] Running
	I1027 23:23:20.117174 1348730 system_pods.go:89] "kube-controller-manager-old-k8s-version-477179" [78689547-e0c2-45a3-a2d8-2ee973b8d629] Running
	I1027 23:23:20.117178 1348730 system_pods.go:89] "kube-proxy-t6hvl" [2953b030-a25c-4882-9fab-7361700ee9ec] Running
	I1027 23:23:20.117182 1348730 system_pods.go:89] "kube-scheduler-old-k8s-version-477179" [b84fc635-c8d8-4276-9dc5-3c077b3cb355] Running
	I1027 23:23:20.117193 1348730 system_pods.go:89] "storage-provisioner" [cbfbf2cd-d56e-4b50-80d3-178ee16d8c54] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:23:20.117218 1348730 retry.go:31] will retry after 239.793858ms: missing components: kube-dns
	I1027 23:23:20.361578 1348730 system_pods.go:86] 8 kube-system pods found
	I1027 23:23:20.361613 1348730 system_pods.go:89] "coredns-5dd5756b68-zmrh9" [da1efa5b-0929-4757-a96a-7b030212b09b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:23:20.361621 1348730 system_pods.go:89] "etcd-old-k8s-version-477179" [be864fb9-c8b5-4aae-bc2d-69d5d9d85994] Running
	I1027 23:23:20.361629 1348730 system_pods.go:89] "kindnet-z26d6" [3b032e58-90ac-4c80-95f1-1d1fcb2b96f3] Running
	I1027 23:23:20.361634 1348730 system_pods.go:89] "kube-apiserver-old-k8s-version-477179" [72d86f1f-8f08-49fe-bf99-ec1a3849859f] Running
	I1027 23:23:20.361639 1348730 system_pods.go:89] "kube-controller-manager-old-k8s-version-477179" [78689547-e0c2-45a3-a2d8-2ee973b8d629] Running
	I1027 23:23:20.361642 1348730 system_pods.go:89] "kube-proxy-t6hvl" [2953b030-a25c-4882-9fab-7361700ee9ec] Running
	I1027 23:23:20.361647 1348730 system_pods.go:89] "kube-scheduler-old-k8s-version-477179" [b84fc635-c8d8-4276-9dc5-3c077b3cb355] Running
	I1027 23:23:20.361653 1348730 system_pods.go:89] "storage-provisioner" [cbfbf2cd-d56e-4b50-80d3-178ee16d8c54] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:23:20.361672 1348730 retry.go:31] will retry after 294.353208ms: missing components: kube-dns
	I1027 23:23:20.661199 1348730 system_pods.go:86] 8 kube-system pods found
	I1027 23:23:20.661236 1348730 system_pods.go:89] "coredns-5dd5756b68-zmrh9" [da1efa5b-0929-4757-a96a-7b030212b09b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:23:20.661243 1348730 system_pods.go:89] "etcd-old-k8s-version-477179" [be864fb9-c8b5-4aae-bc2d-69d5d9d85994] Running
	I1027 23:23:20.661250 1348730 system_pods.go:89] "kindnet-z26d6" [3b032e58-90ac-4c80-95f1-1d1fcb2b96f3] Running
	I1027 23:23:20.661255 1348730 system_pods.go:89] "kube-apiserver-old-k8s-version-477179" [72d86f1f-8f08-49fe-bf99-ec1a3849859f] Running
	I1027 23:23:20.661259 1348730 system_pods.go:89] "kube-controller-manager-old-k8s-version-477179" [78689547-e0c2-45a3-a2d8-2ee973b8d629] Running
	I1027 23:23:20.661263 1348730 system_pods.go:89] "kube-proxy-t6hvl" [2953b030-a25c-4882-9fab-7361700ee9ec] Running
	I1027 23:23:20.661267 1348730 system_pods.go:89] "kube-scheduler-old-k8s-version-477179" [b84fc635-c8d8-4276-9dc5-3c077b3cb355] Running
	I1027 23:23:20.661273 1348730 system_pods.go:89] "storage-provisioner" [cbfbf2cd-d56e-4b50-80d3-178ee16d8c54] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:23:20.661292 1348730 retry.go:31] will retry after 480.489071ms: missing components: kube-dns
	I1027 23:23:21.146661 1348730 system_pods.go:86] 8 kube-system pods found
	I1027 23:23:21.146698 1348730 system_pods.go:89] "coredns-5dd5756b68-zmrh9" [da1efa5b-0929-4757-a96a-7b030212b09b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:23:21.146705 1348730 system_pods.go:89] "etcd-old-k8s-version-477179" [be864fb9-c8b5-4aae-bc2d-69d5d9d85994] Running
	I1027 23:23:21.146712 1348730 system_pods.go:89] "kindnet-z26d6" [3b032e58-90ac-4c80-95f1-1d1fcb2b96f3] Running
	I1027 23:23:21.146717 1348730 system_pods.go:89] "kube-apiserver-old-k8s-version-477179" [72d86f1f-8f08-49fe-bf99-ec1a3849859f] Running
	I1027 23:23:21.146722 1348730 system_pods.go:89] "kube-controller-manager-old-k8s-version-477179" [78689547-e0c2-45a3-a2d8-2ee973b8d629] Running
	I1027 23:23:21.146725 1348730 system_pods.go:89] "kube-proxy-t6hvl" [2953b030-a25c-4882-9fab-7361700ee9ec] Running
	I1027 23:23:21.146730 1348730 system_pods.go:89] "kube-scheduler-old-k8s-version-477179" [b84fc635-c8d8-4276-9dc5-3c077b3cb355] Running
	I1027 23:23:21.146734 1348730 system_pods.go:89] "storage-provisioner" [cbfbf2cd-d56e-4b50-80d3-178ee16d8c54] Running
	I1027 23:23:21.146749 1348730 retry.go:31] will retry after 548.545903ms: missing components: kube-dns
	I1027 23:23:21.700127 1348730 system_pods.go:86] 8 kube-system pods found
	I1027 23:23:21.700163 1348730 system_pods.go:89] "coredns-5dd5756b68-zmrh9" [da1efa5b-0929-4757-a96a-7b030212b09b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:23:21.700170 1348730 system_pods.go:89] "etcd-old-k8s-version-477179" [be864fb9-c8b5-4aae-bc2d-69d5d9d85994] Running
	I1027 23:23:21.700214 1348730 system_pods.go:89] "kindnet-z26d6" [3b032e58-90ac-4c80-95f1-1d1fcb2b96f3] Running
	I1027 23:23:21.700226 1348730 system_pods.go:89] "kube-apiserver-old-k8s-version-477179" [72d86f1f-8f08-49fe-bf99-ec1a3849859f] Running
	I1027 23:23:21.700231 1348730 system_pods.go:89] "kube-controller-manager-old-k8s-version-477179" [78689547-e0c2-45a3-a2d8-2ee973b8d629] Running
	I1027 23:23:21.700235 1348730 system_pods.go:89] "kube-proxy-t6hvl" [2953b030-a25c-4882-9fab-7361700ee9ec] Running
	I1027 23:23:21.700239 1348730 system_pods.go:89] "kube-scheduler-old-k8s-version-477179" [b84fc635-c8d8-4276-9dc5-3c077b3cb355] Running
	I1027 23:23:21.700257 1348730 system_pods.go:89] "storage-provisioner" [cbfbf2cd-d56e-4b50-80d3-178ee16d8c54] Running
	I1027 23:23:21.700276 1348730 retry.go:31] will retry after 566.657359ms: missing components: kube-dns
	I1027 23:23:22.271742 1348730 system_pods.go:86] 8 kube-system pods found
	I1027 23:23:22.271778 1348730 system_pods.go:89] "coredns-5dd5756b68-zmrh9" [da1efa5b-0929-4757-a96a-7b030212b09b] Running
	I1027 23:23:22.271785 1348730 system_pods.go:89] "etcd-old-k8s-version-477179" [be864fb9-c8b5-4aae-bc2d-69d5d9d85994] Running
	I1027 23:23:22.271792 1348730 system_pods.go:89] "kindnet-z26d6" [3b032e58-90ac-4c80-95f1-1d1fcb2b96f3] Running
	I1027 23:23:22.271796 1348730 system_pods.go:89] "kube-apiserver-old-k8s-version-477179" [72d86f1f-8f08-49fe-bf99-ec1a3849859f] Running
	I1027 23:23:22.271802 1348730 system_pods.go:89] "kube-controller-manager-old-k8s-version-477179" [78689547-e0c2-45a3-a2d8-2ee973b8d629] Running
	I1027 23:23:22.271806 1348730 system_pods.go:89] "kube-proxy-t6hvl" [2953b030-a25c-4882-9fab-7361700ee9ec] Running
	I1027 23:23:22.271811 1348730 system_pods.go:89] "kube-scheduler-old-k8s-version-477179" [b84fc635-c8d8-4276-9dc5-3c077b3cb355] Running
	I1027 23:23:22.271815 1348730 system_pods.go:89] "storage-provisioner" [cbfbf2cd-d56e-4b50-80d3-178ee16d8c54] Running
	I1027 23:23:22.271824 1348730 system_pods.go:126] duration metric: took 2.158419931s to wait for k8s-apps to be running ...
	I1027 23:23:22.271834 1348730 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:23:22.271891 1348730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:23:22.285167 1348730 system_svc.go:56] duration metric: took 13.314814ms WaitForService to wait for kubelet
	I1027 23:23:22.285196 1348730 kubeadm.go:587] duration metric: took 17.675488381s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:23:22.285221 1348730 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:23:22.287925 1348730 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:23:22.287958 1348730 node_conditions.go:123] node cpu capacity is 2
	I1027 23:23:22.287972 1348730 node_conditions.go:105] duration metric: took 2.74545ms to run NodePressure ...
	I1027 23:23:22.287984 1348730 start.go:242] waiting for startup goroutines ...
	I1027 23:23:22.287992 1348730 start.go:247] waiting for cluster config update ...
	I1027 23:23:22.288006 1348730 start.go:256] writing updated cluster config ...
	I1027 23:23:22.288286 1348730 ssh_runner.go:195] Run: rm -f paused
	I1027 23:23:22.291810 1348730 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:23:22.296225 1348730 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-zmrh9" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:22.301561 1348730 pod_ready.go:94] pod "coredns-5dd5756b68-zmrh9" is "Ready"
	I1027 23:23:22.301586 1348730 pod_ready.go:86] duration metric: took 5.337093ms for pod "coredns-5dd5756b68-zmrh9" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:22.304700 1348730 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:22.309511 1348730 pod_ready.go:94] pod "etcd-old-k8s-version-477179" is "Ready"
	I1027 23:23:22.309582 1348730 pod_ready.go:86] duration metric: took 4.857611ms for pod "etcd-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:22.312807 1348730 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:22.317466 1348730 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-477179" is "Ready"
	I1027 23:23:22.317493 1348730 pod_ready.go:86] duration metric: took 4.661859ms for pod "kube-apiserver-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:22.320549 1348730 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:22.695741 1348730 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-477179" is "Ready"
	I1027 23:23:22.695770 1348730 pod_ready.go:86] duration metric: took 375.195889ms for pod "kube-controller-manager-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:22.897331 1348730 pod_ready.go:83] waiting for pod "kube-proxy-t6hvl" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:23.296201 1348730 pod_ready.go:94] pod "kube-proxy-t6hvl" is "Ready"
	I1027 23:23:23.296229 1348730 pod_ready.go:86] duration metric: took 398.868487ms for pod "kube-proxy-t6hvl" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:23.497149 1348730 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:23.896459 1348730 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-477179" is "Ready"
	I1027 23:23:23.896490 1348730 pod_ready.go:86] duration metric: took 399.31454ms for pod "kube-scheduler-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:23:23.896503 1348730 pod_ready.go:40] duration metric: took 1.604659933s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:23:23.958621 1348730 start.go:626] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1027 23:23:23.961919 1348730 out.go:203] 
	W1027 23:23:23.964961 1348730 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1027 23:23:23.968002 1348730 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1027 23:23:23.971781 1348730 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-477179" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 27 23:23:21 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:21.781412229Z" level=info msg="Created container d93e7ae2c4810d5e24b69bd8a95f05cd218cd4052893f5b10dbdeb02c0533217: kube-system/coredns-5dd5756b68-zmrh9/coredns" id=c130aca7-1f8d-4142-8372-b19f7bb0b5df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:23:21 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:21.782917719Z" level=info msg="Starting container: d93e7ae2c4810d5e24b69bd8a95f05cd218cd4052893f5b10dbdeb02c0533217" id=5c6ac845-e9c5-4d53-9f2e-314bd25540ec name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:23:21 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:21.788213302Z" level=info msg="Started container" PID=1936 containerID=d93e7ae2c4810d5e24b69bd8a95f05cd218cd4052893f5b10dbdeb02c0533217 description=kube-system/coredns-5dd5756b68-zmrh9/coredns id=5c6ac845-e9c5-4d53-9f2e-314bd25540ec name=/runtime.v1.RuntimeService/StartContainer sandboxID=9065e9b8d7f6131b8bcd060137a1c251aa04614d34dcd3cbbefbe6f8dcb3fe3a
	Oct 27 23:23:24 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:24.54392204Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f266aa97-c8ae-4f00-91ec-d7fcdeeb3612 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:23:24 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:24.544003928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:23:24 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:24.550127124Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7e71329f0ce97ab3a82ddaf9eb06aaffd34fb0e3af2de19da88c1b5d9c7cca33 UID:d61db7c2-37e3-45dd-a444-eb086de138ff NetNS:/var/run/netns/e0a0e1fa-039a-4b7a-a164-ad8022b718d6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012dba0}] Aliases:map[]}"
	Oct 27 23:23:24 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:24.550167913Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 27 23:23:24 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:24.567175038Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7e71329f0ce97ab3a82ddaf9eb06aaffd34fb0e3af2de19da88c1b5d9c7cca33 UID:d61db7c2-37e3-45dd-a444-eb086de138ff NetNS:/var/run/netns/e0a0e1fa-039a-4b7a-a164-ad8022b718d6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012dba0}] Aliases:map[]}"
	Oct 27 23:23:24 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:24.567317325Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 27 23:23:24 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:24.574722829Z" level=info msg="Ran pod sandbox 7e71329f0ce97ab3a82ddaf9eb06aaffd34fb0e3af2de19da88c1b5d9c7cca33 with infra container: default/busybox/POD" id=f266aa97-c8ae-4f00-91ec-d7fcdeeb3612 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:23:24 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:24.575771697Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=af292784-f21e-4b62-90cb-026b4881d4b3 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:23:24 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:24.575925701Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=af292784-f21e-4b62-90cb-026b4881d4b3 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:23:24 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:24.575984607Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=af292784-f21e-4b62-90cb-026b4881d4b3 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:23:24 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:24.576723341Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3cc0e96b-9387-437f-8000-4f4f4ac9605c name=/runtime.v1.ImageService/PullImage
	Oct 27 23:23:24 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:24.583129028Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 27 23:23:26 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:26.597535803Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=3cc0e96b-9387-437f-8000-4f4f4ac9605c name=/runtime.v1.ImageService/PullImage
	Oct 27 23:23:26 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:26.601088411Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4443fe72-7f23-4b13-9d9f-e58687b59c35 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:23:26 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:26.60314906Z" level=info msg="Creating container: default/busybox/busybox" id=fe8e2c6b-9128-4c5c-8b30-c33b96c5175c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:23:26 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:26.603453663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:23:26 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:26.616437388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:23:26 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:26.61709672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:23:26 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:26.64000431Z" level=info msg="Created container d70124a3631a52b418d3b071b660d55842ecb3f3c21a78ec2dc96cda28541e5f: default/busybox/busybox" id=fe8e2c6b-9128-4c5c-8b30-c33b96c5175c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:23:26 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:26.643688441Z" level=info msg="Starting container: d70124a3631a52b418d3b071b660d55842ecb3f3c21a78ec2dc96cda28541e5f" id=c6970dfa-32d6-488a-b0ae-c3bb62fe05de name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:23:26 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:26.653199862Z" level=info msg="Started container" PID=1992 containerID=d70124a3631a52b418d3b071b660d55842ecb3f3c21a78ec2dc96cda28541e5f description=default/busybox/busybox id=c6970dfa-32d6-488a-b0ae-c3bb62fe05de name=/runtime.v1.RuntimeService/StartContainer sandboxID=7e71329f0ce97ab3a82ddaf9eb06aaffd34fb0e3af2de19da88c1b5d9c7cca33
	Oct 27 23:23:33 old-k8s-version-477179 crio[837]: time="2025-10-27T23:23:33.386256323Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	d70124a3631a5       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   7e71329f0ce97       busybox                                          default
	d93e7ae2c4810       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   9065e9b8d7f61       coredns-5dd5756b68-zmrh9                         kube-system
	554268ad8a359       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      15 seconds ago      Running             storage-provisioner       0                   89fa2f92e7c66       storage-provisioner                              kube-system
	af1a24a33709d       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   95dbfe852d300       kindnet-z26d6                                    kube-system
	4cfd2b2c7063d       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      29 seconds ago      Running             kube-proxy                0                   60d5adca48901       kube-proxy-t6hvl                                 kube-system
	208a1606840e2       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      49 seconds ago      Running             kube-apiserver            0                   972d784bf713b       kube-apiserver-old-k8s-version-477179            kube-system
	0b74f714372e9       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      49 seconds ago      Running             etcd                      0                   4fc1f5e0e296a       etcd-old-k8s-version-477179                      kube-system
	7ea7069a9ac04       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      49 seconds ago      Running             kube-scheduler            0                   09ae005bae1d5       kube-scheduler-old-k8s-version-477179            kube-system
	9719477d64455       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      49 seconds ago      Running             kube-controller-manager   0                   68a1b05ac7e04       kube-controller-manager-old-k8s-version-477179   kube-system
	
	
	==> coredns [d93e7ae2c4810d5e24b69bd8a95f05cd218cd4052893f5b10dbdeb02c0533217] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52014 - 39622 "HINFO IN 6533057563642037500.650166810582254019. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.024123821s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-477179
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-477179
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=old-k8s-version-477179
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T23_22_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 23:22:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-477179
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 23:23:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 23:23:23 +0000   Mon, 27 Oct 2025 23:22:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 23:23:23 +0000   Mon, 27 Oct 2025 23:22:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 23:23:23 +0000   Mon, 27 Oct 2025 23:22:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 23:23:23 +0000   Mon, 27 Oct 2025 23:23:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-477179
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                c71561b3-c618-4514-9439-9c8988ccb8a0
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-zmrh9                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-old-k8s-version-477179                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         42s
	  kube-system                 kindnet-z26d6                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-477179             250m (12%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-477179    200m (10%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-t6hvl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-477179             100m (5%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  Starting                 50s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node old-k8s-version-477179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node old-k8s-version-477179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x8 over 50s)  kubelet          Node old-k8s-version-477179 status is now: NodeHasSufficientPID
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s                kubelet          Node old-k8s-version-477179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s                kubelet          Node old-k8s-version-477179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s                kubelet          Node old-k8s-version-477179 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node old-k8s-version-477179 event: Registered Node old-k8s-version-477179 in Controller
	  Normal  NodeReady                16s                kubelet          Node old-k8s-version-477179 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct27 22:59] overlayfs: idmapped layers are currently not supported
	[ +25.315146] overlayfs: idmapped layers are currently not supported
	[  +1.719322] overlayfs: idmapped layers are currently not supported
	[Oct27 23:00] overlayfs: idmapped layers are currently not supported
	[Oct27 23:01] overlayfs: idmapped layers are currently not supported
	[ +42.515610] overlayfs: idmapped layers are currently not supported
	[Oct27 23:02] overlayfs: idmapped layers are currently not supported
	[Oct27 23:03] overlayfs: idmapped layers are currently not supported
	[Oct27 23:04] overlayfs: idmapped layers are currently not supported
	[Oct27 23:06] overlayfs: idmapped layers are currently not supported
	[  +3.129054] overlayfs: idmapped layers are currently not supported
	[Oct27 23:08] overlayfs: idmapped layers are currently not supported
	[Oct27 23:09] overlayfs: idmapped layers are currently not supported
	[  +0.696324] overlayfs: idmapped layers are currently not supported
	[ +42.065460] overlayfs: idmapped layers are currently not supported
	[Oct27 23:10] overlayfs: idmapped layers are currently not supported
	[ +23.722860] overlayfs: idmapped layers are currently not supported
	[Oct27 23:16] overlayfs: idmapped layers are currently not supported
	[Oct27 23:17] overlayfs: idmapped layers are currently not supported
	[Oct27 23:18] overlayfs: idmapped layers are currently not supported
	[Oct27 23:19] overlayfs: idmapped layers are currently not supported
	[Oct27 23:20] overlayfs: idmapped layers are currently not supported
	[Oct27 23:21] overlayfs: idmapped layers are currently not supported
	[Oct27 23:22] overlayfs: idmapped layers are currently not supported
	[ +34.590925] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0b74f714372e97eb1a3cc0d3ccd26e7c33de9d07091992ace62390d31167275f] <==
	{"level":"info","ts":"2025-10-27T23:22:45.80039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-27T23:22:45.804398Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-27T23:22:45.803905Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-27T23:22:45.804761Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-27T23:22:45.804833Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-27T23:22:45.803935Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-27T23:22:45.805051Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-27T23:22:46.558426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-27T23:22:46.558541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-27T23:22:46.558598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-10-27T23:22:46.558638Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-10-27T23:22:46.558673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-27T23:22:46.558715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-27T23:22:46.558753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-27T23:22:46.562482Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T23:22:46.566577Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-477179 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-27T23:22:46.57045Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T23:22:46.570472Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T23:22:46.570564Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-27T23:22:46.570619Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-27T23:22:46.570527Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T23:22:46.570754Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T23:22:46.570539Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T23:22:46.571754Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-27T23:22:46.572036Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:23:35 up  6:06,  0 user,  load average: 2.56, 3.38, 3.01
	Linux old-k8s-version-477179 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [af1a24a33709da060698c05fa5d456d9164a87d0d6844fab1feb905e50a1faa1] <==
	I1027 23:23:09.438361       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:23:09.438737       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 23:23:09.438879       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:23:09.438890       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:23:09.438904       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:23:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:23:09.718356       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:23:09.718653       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:23:09.718698       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:23:09.719548       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 23:23:09.827054       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 23:23:09.914464       1 metrics.go:72] Registering metrics
	I1027 23:23:09.914736       1 controller.go:711] "Syncing nftables rules"
	I1027 23:23:19.726554       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 23:23:19.726619       1 main.go:301] handling current node
	I1027 23:23:29.719561       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 23:23:29.719613       1 main.go:301] handling current node
	
	
	==> kube-apiserver [208a1606840e2b74c1813f771a76e6ba3e9210652bc32f39c47859006eafc3d9] <==
	I1027 23:22:49.507548       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1027 23:22:49.508926       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1027 23:22:49.510707       1 controller.go:624] quota admission added evaluator for: namespaces
	I1027 23:22:49.517649       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1027 23:22:49.517988       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 23:22:49.518129       1 aggregator.go:166] initial CRD sync complete...
	I1027 23:22:49.518169       1 autoregister_controller.go:141] Starting autoregister controller
	I1027 23:22:49.518209       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 23:22:49.518246       1 cache.go:39] Caches are synced for autoregister controller
	I1027 23:22:49.551198       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 23:22:50.123507       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 23:22:50.133692       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 23:22:50.134466       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 23:22:50.880138       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 23:22:50.928621       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 23:22:51.062688       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 23:22:51.076108       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1027 23:22:51.077628       1 controller.go:624] quota admission added evaluator for: endpoints
	I1027 23:22:51.085616       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 23:22:51.291476       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1027 23:22:52.734049       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1027 23:22:52.753493       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 23:22:52.766440       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1027 23:23:04.882024       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1027 23:23:05.025713       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [9719477d6445504db80be147a69614213639a65bda221744ee775cd707f21291] <==
	I1027 23:23:04.297842       1 shared_informer.go:318] Caches are synced for resource quota
	I1027 23:23:04.299138       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1027 23:23:04.725890       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 23:23:04.746759       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 23:23:04.746790       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1027 23:23:04.939655       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1027 23:23:05.280247       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-z26d6"
	I1027 23:23:05.305007       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-t6hvl"
	I1027 23:23:05.310621       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-zmrh9"
	I1027 23:23:05.366458       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-8frhk"
	I1027 23:23:05.392828       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="453.479006ms"
	I1027 23:23:05.431839       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="38.964269ms"
	I1027 23:23:05.431966       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.412µs"
	I1027 23:23:05.449025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.853µs"
	I1027 23:23:06.652554       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1027 23:23:06.696643       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-8frhk"
	I1027 23:23:06.725545       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.753319ms"
	I1027 23:23:06.738051       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.458261ms"
	I1027 23:23:06.738337       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="146.226µs"
	I1027 23:23:19.938680       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.663µs"
	I1027 23:23:19.989720       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.718µs"
	I1027 23:23:22.115516       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="160.174µs"
	I1027 23:23:22.153756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.027767ms"
	I1027 23:23:22.154573       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.526µs"
	I1027 23:23:24.200771       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [4cfd2b2c7063d1fc298937dc3358e21567cd7c5a13cc80df9c628577ceb5c937] <==
	I1027 23:23:05.811140       1 server_others.go:69] "Using iptables proxy"
	I1027 23:23:05.831497       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1027 23:23:05.866506       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:23:05.869292       1 server_others.go:152] "Using iptables Proxier"
	I1027 23:23:05.869336       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1027 23:23:05.869344       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1027 23:23:05.869375       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1027 23:23:05.869669       1 server.go:846] "Version info" version="v1.28.0"
	I1027 23:23:05.869679       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:23:05.870718       1 config.go:188] "Starting service config controller"
	I1027 23:23:05.873158       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1027 23:23:05.873200       1 config.go:97] "Starting endpoint slice config controller"
	I1027 23:23:05.873206       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1027 23:23:05.873715       1 config.go:315] "Starting node config controller"
	I1027 23:23:05.874958       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1027 23:23:05.974018       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1027 23:23:05.974064       1 shared_informer.go:318] Caches are synced for service config
	I1027 23:23:05.975246       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7ea7069a9ac048e3540305fdcf01d1d935d128873256f10d6f0f8f9ae7cd0511] <==
	W1027 23:22:50.238238       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1027 23:22:50.238344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1027 23:22:50.238616       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1027 23:22:50.238665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1027 23:22:50.238923       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1027 23:22:50.238972       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1027 23:22:50.239053       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1027 23:22:50.239089       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1027 23:22:50.239155       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1027 23:22:50.239191       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1027 23:22:50.239260       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1027 23:22:50.239300       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1027 23:22:50.239367       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1027 23:22:50.239418       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1027 23:22:50.239509       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1027 23:22:50.239547       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1027 23:22:50.239634       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1027 23:22:50.239668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1027 23:22:50.239735       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1027 23:22:50.240169       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1027 23:22:50.240269       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1027 23:22:50.240305       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1027 23:22:50.240359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1027 23:22:50.240404       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1027 23:22:51.425162       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 27 23:23:05 old-k8s-version-477179 kubelet[1382]: I1027 23:23:05.349861    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms2xc\" (UniqueName: \"kubernetes.io/projected/3b032e58-90ac-4c80-95f1-1d1fcb2b96f3-kube-api-access-ms2xc\") pod \"kindnet-z26d6\" (UID: \"3b032e58-90ac-4c80-95f1-1d1fcb2b96f3\") " pod="kube-system/kindnet-z26d6"
	Oct 27 23:23:05 old-k8s-version-477179 kubelet[1382]: I1027 23:23:05.349899    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2953b030-a25c-4882-9fab-7361700ee9ec-lib-modules\") pod \"kube-proxy-t6hvl\" (UID: \"2953b030-a25c-4882-9fab-7361700ee9ec\") " pod="kube-system/kube-proxy-t6hvl"
	Oct 27 23:23:05 old-k8s-version-477179 kubelet[1382]: I1027 23:23:05.349923    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mstdb\" (UniqueName: \"kubernetes.io/projected/2953b030-a25c-4882-9fab-7361700ee9ec-kube-api-access-mstdb\") pod \"kube-proxy-t6hvl\" (UID: \"2953b030-a25c-4882-9fab-7361700ee9ec\") " pod="kube-system/kube-proxy-t6hvl"
	Oct 27 23:23:05 old-k8s-version-477179 kubelet[1382]: I1027 23:23:05.349951    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b032e58-90ac-4c80-95f1-1d1fcb2b96f3-lib-modules\") pod \"kindnet-z26d6\" (UID: \"3b032e58-90ac-4c80-95f1-1d1fcb2b96f3\") " pod="kube-system/kindnet-z26d6"
	Oct 27 23:23:05 old-k8s-version-477179 kubelet[1382]: I1027 23:23:05.349974    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2953b030-a25c-4882-9fab-7361700ee9ec-xtables-lock\") pod \"kube-proxy-t6hvl\" (UID: \"2953b030-a25c-4882-9fab-7361700ee9ec\") " pod="kube-system/kube-proxy-t6hvl"
	Oct 27 23:23:05 old-k8s-version-477179 kubelet[1382]: I1027 23:23:05.349999    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b032e58-90ac-4c80-95f1-1d1fcb2b96f3-xtables-lock\") pod \"kindnet-z26d6\" (UID: \"3b032e58-90ac-4c80-95f1-1d1fcb2b96f3\") " pod="kube-system/kindnet-z26d6"
	Oct 27 23:23:10 old-k8s-version-477179 kubelet[1382]: I1027 23:23:10.079839    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-z26d6" podStartSLOduration=1.347637037 podCreationTimestamp="2025-10-27 23:23:05 +0000 UTC" firstStartedPulling="2025-10-27 23:23:05.634046315 +0000 UTC m=+12.937128113" lastFinishedPulling="2025-10-27 23:23:09.366189595 +0000 UTC m=+16.669271393" observedRunningTime="2025-10-27 23:23:10.079646227 +0000 UTC m=+17.382728025" watchObservedRunningTime="2025-10-27 23:23:10.079780317 +0000 UTC m=+17.382862123"
	Oct 27 23:23:10 old-k8s-version-477179 kubelet[1382]: I1027 23:23:10.080579    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-t6hvl" podStartSLOduration=5.080529718 podCreationTimestamp="2025-10-27 23:23:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:23:06.057138535 +0000 UTC m=+13.360220341" watchObservedRunningTime="2025-10-27 23:23:10.080529718 +0000 UTC m=+17.383611516"
	Oct 27 23:23:19 old-k8s-version-477179 kubelet[1382]: I1027 23:23:19.880020    1382 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 27 23:23:19 old-k8s-version-477179 kubelet[1382]: I1027 23:23:19.924490    1382 topology_manager.go:215] "Topology Admit Handler" podUID="cbfbf2cd-d56e-4b50-80d3-178ee16d8c54" podNamespace="kube-system" podName="storage-provisioner"
	Oct 27 23:23:19 old-k8s-version-477179 kubelet[1382]: I1027 23:23:19.927542    1382 topology_manager.go:215] "Topology Admit Handler" podUID="da1efa5b-0929-4757-a96a-7b030212b09b" podNamespace="kube-system" podName="coredns-5dd5756b68-zmrh9"
	Oct 27 23:23:19 old-k8s-version-477179 kubelet[1382]: W1027 23:23:19.934668    1382 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-477179" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-477179' and this object
	Oct 27 23:23:19 old-k8s-version-477179 kubelet[1382]: E1027 23:23:19.934744    1382 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-477179" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-477179' and this object
	Oct 27 23:23:19 old-k8s-version-477179 kubelet[1382]: I1027 23:23:19.956057    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cbfbf2cd-d56e-4b50-80d3-178ee16d8c54-tmp\") pod \"storage-provisioner\" (UID: \"cbfbf2cd-d56e-4b50-80d3-178ee16d8c54\") " pod="kube-system/storage-provisioner"
	Oct 27 23:23:19 old-k8s-version-477179 kubelet[1382]: I1027 23:23:19.956126    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v6w8\" (UniqueName: \"kubernetes.io/projected/cbfbf2cd-d56e-4b50-80d3-178ee16d8c54-kube-api-access-6v6w8\") pod \"storage-provisioner\" (UID: \"cbfbf2cd-d56e-4b50-80d3-178ee16d8c54\") " pod="kube-system/storage-provisioner"
	Oct 27 23:23:19 old-k8s-version-477179 kubelet[1382]: I1027 23:23:19.956155    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da1efa5b-0929-4757-a96a-7b030212b09b-config-volume\") pod \"coredns-5dd5756b68-zmrh9\" (UID: \"da1efa5b-0929-4757-a96a-7b030212b09b\") " pod="kube-system/coredns-5dd5756b68-zmrh9"
	Oct 27 23:23:19 old-k8s-version-477179 kubelet[1382]: I1027 23:23:19.956182    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phfkf\" (UniqueName: \"kubernetes.io/projected/da1efa5b-0929-4757-a96a-7b030212b09b-kube-api-access-phfkf\") pod \"coredns-5dd5756b68-zmrh9\" (UID: \"da1efa5b-0929-4757-a96a-7b030212b09b\") " pod="kube-system/coredns-5dd5756b68-zmrh9"
	Oct 27 23:23:21 old-k8s-version-477179 kubelet[1382]: E1027 23:23:21.057355    1382 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Oct 27 23:23:21 old-k8s-version-477179 kubelet[1382]: E1027 23:23:21.057946    1382 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/da1efa5b-0929-4757-a96a-7b030212b09b-config-volume podName:da1efa5b-0929-4757-a96a-7b030212b09b nodeName:}" failed. No retries permitted until 2025-10-27 23:23:21.557917914 +0000 UTC m=+28.860999712 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/da1efa5b-0929-4757-a96a-7b030212b09b-config-volume") pod "coredns-5dd5756b68-zmrh9" (UID: "da1efa5b-0929-4757-a96a-7b030212b09b") : failed to sync configmap cache: timed out waiting for the condition
	Oct 27 23:23:21 old-k8s-version-477179 kubelet[1382]: W1027 23:23:21.752658    1382 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373/crio-9065e9b8d7f6131b8bcd060137a1c251aa04614d34dcd3cbbefbe6f8dcb3fe3a WatchSource:0}: Error finding container 9065e9b8d7f6131b8bcd060137a1c251aa04614d34dcd3cbbefbe6f8dcb3fe3a: Status 404 returned error can't find the container with id 9065e9b8d7f6131b8bcd060137a1c251aa04614d34dcd3cbbefbe6f8dcb3fe3a
	Oct 27 23:23:22 old-k8s-version-477179 kubelet[1382]: I1027 23:23:22.113804    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.113747466 podCreationTimestamp="2025-10-27 23:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:23:21.108395988 +0000 UTC m=+28.411477785" watchObservedRunningTime="2025-10-27 23:23:22.113747466 +0000 UTC m=+29.416829264"
	Oct 27 23:23:22 old-k8s-version-477179 kubelet[1382]: I1027 23:23:22.134530    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-zmrh9" podStartSLOduration=17.134486985 podCreationTimestamp="2025-10-27 23:23:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:23:22.114862813 +0000 UTC m=+29.417944619" watchObservedRunningTime="2025-10-27 23:23:22.134486985 +0000 UTC m=+29.437568791"
	Oct 27 23:23:24 old-k8s-version-477179 kubelet[1382]: I1027 23:23:24.240820    1382 topology_manager.go:215] "Topology Admit Handler" podUID="d61db7c2-37e3-45dd-a444-eb086de138ff" podNamespace="default" podName="busybox"
	Oct 27 23:23:24 old-k8s-version-477179 kubelet[1382]: I1027 23:23:24.290295    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4t9k\" (UniqueName: \"kubernetes.io/projected/d61db7c2-37e3-45dd-a444-eb086de138ff-kube-api-access-v4t9k\") pod \"busybox\" (UID: \"d61db7c2-37e3-45dd-a444-eb086de138ff\") " pod="default/busybox"
	Oct 27 23:23:24 old-k8s-version-477179 kubelet[1382]: W1027 23:23:24.571922    1382 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373/crio-7e71329f0ce97ab3a82ddaf9eb06aaffd34fb0e3af2de19da88c1b5d9c7cca33 WatchSource:0}: Error finding container 7e71329f0ce97ab3a82ddaf9eb06aaffd34fb0e3af2de19da88c1b5d9c7cca33: Status 404 returned error can't find the container with id 7e71329f0ce97ab3a82ddaf9eb06aaffd34fb0e3af2de19da88c1b5d9c7cca33
	
	
	==> storage-provisioner [554268ad8a359795287f256a05747629f2d3d8c108f7eb744632f88b50ab5994] <==
	I1027 23:23:20.290493       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 23:23:20.306506       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 23:23:20.306701       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1027 23:23:20.314503       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 23:23:20.314698       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-477179_d4267e5c-308b-4f2f-8b43-97c9cef27eb9!
	I1027 23:23:20.315731       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"60ebd7b9-9b45-4373-8eb9-0ab942bf1b51", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-477179_d4267e5c-308b-4f2f-8b43-97c9cef27eb9 became leader
	I1027 23:23:20.415807       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-477179_d4267e5c-308b-4f2f-8b43-97c9cef27eb9!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-477179 -n old-k8s-version-477179
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-477179 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.34s)
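The post-mortem above shows the cluster itself healthy (all kube-system pods Running and Ready, node Ready), so this EnableAddonWhileActive failure sits in the addon-enable path rather than cluster bring-up. To retry that step by hand against the same profile, a minimal sketch, assuming metrics-server is the addon this subtest exercises (the addon name is an assumption, not confirmed by the excerpt):

	# re-run the addon enable step the subtest performs (assumed addon name)
	out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-477179 --alsologtostderr -v=1

The -p, --alsologtostderr and -v flags mirror the invocations recorded elsewhere in this report.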

TestStartStop/group/old-k8s-version/serial/Pause (6.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-477179 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-477179 --alsologtostderr -v=1: exit status 80 (1.984802302s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-477179 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 23:24:52.963062 1361193 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:24:52.964037 1361193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:24:52.964089 1361193 out.go:374] Setting ErrFile to fd 2...
	I1027 23:24:52.964112 1361193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:24:52.974203 1361193 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:24:52.974638 1361193 out.go:368] Setting JSON to false
	I1027 23:24:52.974697 1361193 mustload.go:66] Loading cluster: old-k8s-version-477179
	I1027 23:24:52.975147 1361193 config.go:182] Loaded profile config "old-k8s-version-477179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 23:24:52.975658 1361193 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:24:52.997862 1361193 host.go:66] Checking if "old-k8s-version-477179" exists ...
	I1027 23:24:52.998245 1361193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:24:53.073568 1361193 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-27 23:24:53.063777838 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:24:53.074567 1361193 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761414747-21797/minikube-v1.37.0-1761414747-21797-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761414747-21797-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-477179 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 23:24:53.079780 1361193 out.go:179] * Pausing node old-k8s-version-477179 ... 
	I1027 23:24:53.082779 1361193 host.go:66] Checking if "old-k8s-version-477179" exists ...
	I1027 23:24:53.083192 1361193 ssh_runner.go:195] Run: systemctl --version
	I1027 23:24:53.083244 1361193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:24:53.100911 1361193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:24:53.205206 1361193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:24:53.235073 1361193 pause.go:52] kubelet running: true
	I1027 23:24:53.235229 1361193 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 23:24:53.474119 1361193 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 23:24:53.474275 1361193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 23:24:53.563252 1361193 cri.go:89] found id: "9cda4094bfed5a639c35f0a169fc39a8317d45025263f0528ba134c879485b25"
	I1027 23:24:53.563279 1361193 cri.go:89] found id: "266c1e8038479147b3192edbb4966e537d86784dad76d9a4aa532c21689fc44c"
	I1027 23:24:53.563284 1361193 cri.go:89] found id: "8dd45d72c479651ba09d2be7f8a62f2c5eb7ccd81bf397242248fd631ff5c1e2"
	I1027 23:24:53.563288 1361193 cri.go:89] found id: "f6678a4bfdea01a536baa38f2f64d3a12a42d128714d4a3edd59407299000596"
	I1027 23:24:53.563292 1361193 cri.go:89] found id: "2aab2984cba3a6ac659a5293f3fc709521e8bf4e3e62a456804c373f3774d3f5"
	I1027 23:24:53.563296 1361193 cri.go:89] found id: "31d2036be45f7a86c828442bcf45019e9bddf4f8b4f0001aa49eaad623860144"
	I1027 23:24:53.563299 1361193 cri.go:89] found id: "4cc4ea0f92239fc9155b151efab480bb22dbf8b3551f7c315daae1493853f27f"
	I1027 23:24:53.563302 1361193 cri.go:89] found id: "4df94ad74d55d5841a5ebd671ae3a091cbc30efa3d08697d8baed42fd415cbf1"
	I1027 23:24:53.563305 1361193 cri.go:89] found id: "0daf78b0c28b92f6f69bc82b09d8267753a05593afe602cb3abe6fd2fe226dd4"
	I1027 23:24:53.563312 1361193 cri.go:89] found id: "09ab5a46773af9e2116c4944c8fbce13ecce96bc929057f176567b4da1e3a386"
	I1027 23:24:53.563316 1361193 cri.go:89] found id: "76f54d3dbd7fd7c913b3758a5fcab315050789c5914aa4cdea07154989d5e5c1"
	I1027 23:24:53.563320 1361193 cri.go:89] found id: ""
	I1027 23:24:53.563370 1361193 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 23:24:53.574937 1361193 retry.go:31] will retry after 156.241991ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:24:53Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:24:53.731319 1361193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:24:53.744927 1361193 pause.go:52] kubelet running: false
	I1027 23:24:53.745014 1361193 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 23:24:53.932433 1361193 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 23:24:53.932532 1361193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 23:24:54.033586 1361193 cri.go:89] found id: "9cda4094bfed5a639c35f0a169fc39a8317d45025263f0528ba134c879485b25"
	I1027 23:24:54.033614 1361193 cri.go:89] found id: "266c1e8038479147b3192edbb4966e537d86784dad76d9a4aa532c21689fc44c"
	I1027 23:24:54.033620 1361193 cri.go:89] found id: "8dd45d72c479651ba09d2be7f8a62f2c5eb7ccd81bf397242248fd631ff5c1e2"
	I1027 23:24:54.033624 1361193 cri.go:89] found id: "f6678a4bfdea01a536baa38f2f64d3a12a42d128714d4a3edd59407299000596"
	I1027 23:24:54.033628 1361193 cri.go:89] found id: "2aab2984cba3a6ac659a5293f3fc709521e8bf4e3e62a456804c373f3774d3f5"
	I1027 23:24:54.033632 1361193 cri.go:89] found id: "31d2036be45f7a86c828442bcf45019e9bddf4f8b4f0001aa49eaad623860144"
	I1027 23:24:54.033635 1361193 cri.go:89] found id: "4cc4ea0f92239fc9155b151efab480bb22dbf8b3551f7c315daae1493853f27f"
	I1027 23:24:54.033638 1361193 cri.go:89] found id: "4df94ad74d55d5841a5ebd671ae3a091cbc30efa3d08697d8baed42fd415cbf1"
	I1027 23:24:54.033641 1361193 cri.go:89] found id: "0daf78b0c28b92f6f69bc82b09d8267753a05593afe602cb3abe6fd2fe226dd4"
	I1027 23:24:54.033648 1361193 cri.go:89] found id: "09ab5a46773af9e2116c4944c8fbce13ecce96bc929057f176567b4da1e3a386"
	I1027 23:24:54.033651 1361193 cri.go:89] found id: "76f54d3dbd7fd7c913b3758a5fcab315050789c5914aa4cdea07154989d5e5c1"
	I1027 23:24:54.033654 1361193 cri.go:89] found id: ""
	I1027 23:24:54.033717 1361193 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 23:24:54.046653 1361193 retry.go:31] will retry after 561.98305ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:24:54Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:24:54.609421 1361193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:24:54.622771 1361193 pause.go:52] kubelet running: false
	I1027 23:24:54.622872 1361193 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 23:24:54.798429 1361193 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 23:24:54.798523 1361193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 23:24:54.865907 1361193 cri.go:89] found id: "9cda4094bfed5a639c35f0a169fc39a8317d45025263f0528ba134c879485b25"
	I1027 23:24:54.865930 1361193 cri.go:89] found id: "266c1e8038479147b3192edbb4966e537d86784dad76d9a4aa532c21689fc44c"
	I1027 23:24:54.865936 1361193 cri.go:89] found id: "8dd45d72c479651ba09d2be7f8a62f2c5eb7ccd81bf397242248fd631ff5c1e2"
	I1027 23:24:54.865940 1361193 cri.go:89] found id: "f6678a4bfdea01a536baa38f2f64d3a12a42d128714d4a3edd59407299000596"
	I1027 23:24:54.865943 1361193 cri.go:89] found id: "2aab2984cba3a6ac659a5293f3fc709521e8bf4e3e62a456804c373f3774d3f5"
	I1027 23:24:54.865960 1361193 cri.go:89] found id: "31d2036be45f7a86c828442bcf45019e9bddf4f8b4f0001aa49eaad623860144"
	I1027 23:24:54.865963 1361193 cri.go:89] found id: "4cc4ea0f92239fc9155b151efab480bb22dbf8b3551f7c315daae1493853f27f"
	I1027 23:24:54.865966 1361193 cri.go:89] found id: "4df94ad74d55d5841a5ebd671ae3a091cbc30efa3d08697d8baed42fd415cbf1"
	I1027 23:24:54.865970 1361193 cri.go:89] found id: "0daf78b0c28b92f6f69bc82b09d8267753a05593afe602cb3abe6fd2fe226dd4"
	I1027 23:24:54.865980 1361193 cri.go:89] found id: "09ab5a46773af9e2116c4944c8fbce13ecce96bc929057f176567b4da1e3a386"
	I1027 23:24:54.865986 1361193 cri.go:89] found id: "76f54d3dbd7fd7c913b3758a5fcab315050789c5914aa4cdea07154989d5e5c1"
	I1027 23:24:54.865990 1361193 cri.go:89] found id: ""
	I1027 23:24:54.866039 1361193 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 23:24:54.880662 1361193 out.go:203] 
	W1027 23:24:54.883654 1361193 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:24:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 23:24:54.883729 1361193 out.go:285] * 
	W1027 23:24:54.894157 1361193 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 23:24:54.897286 1361193 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-477179 --alsologtostderr -v=1 failed: exit status 80
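The pause failure above is mechanical rather than flaky: pause.go first collects candidate container IDs with crictl (the three "found id:" batches), then verifies what is actually running by shelling out to `sudo runc list -f json`; on this image runc's state directory /run/runc does not exist, so every attempt exits 1, retry.go backs off twice (~156ms, then ~562ms), and the command aborts with GUEST_PAUSE / exit status 80. A hedged reconstruction of that failing step follows; the command string and retry-on-error shape come from the log, while the backoff schedule and helper names are illustrative, not minikube's actual code.

	// Sketch of the step that fails above: run `sudo runc list -f json`,
	// decode the result, retry briefly on error. Illustrative only.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"time"
	)

	// runcContainer keeps only the fields of `runc list -f json` output
	// that matter here; the real output carries more.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func listRunning() ([]runcContainer, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// This is where "open /run/runc: no such file or directory"
			// surfaces: runc cannot open its state root at all.
			return nil, fmt.Errorf("runc list -f json: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		return cs, nil
	}

	func main() {
		backoff := 150 * time.Millisecond
		for attempt := 1; attempt <= 3; attempt++ {
			cs, err := listRunning()
			if err == nil {
				fmt.Printf("found %d running containers\n", len(cs))
				return
			}
			fmt.Printf("attempt %d: %v; retrying after %v\n", attempt, err, backoff)
			time.Sleep(backoff)
			backoff *= 4 // the log shows roughly 156ms, then 562ms
		}
		fmt.Println("exiting: GUEST_PAUSE, could not list running containers")
	}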
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-477179
helpers_test.go:243: (dbg) docker inspect old-k8s-version-477179:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373",
	        "Created": "2025-10-27T23:22:26.560712085Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1357468,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T23:23:49.686109518Z",
	            "FinishedAt": "2025-10-27T23:23:48.691324951Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373/hostname",
	        "HostsPath": "/var/lib/docker/containers/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373/hosts",
	        "LogPath": "/var/lib/docker/containers/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373-json.log",
	        "Name": "/old-k8s-version-477179",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-477179:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-477179",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373",
	                "LowerDir": "/var/lib/docker/overlay2/d8f908fffe7b993d60442f64b7c5515882a75e6389218c999c1c83e3311e169e-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8f908fffe7b993d60442f64b7c5515882a75e6389218c999c1c83e3311e169e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8f908fffe7b993d60442f64b7c5515882a75e6389218c999c1c83e3311e169e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8f908fffe7b993d60442f64b7c5515882a75e6389218c999c1c83e3311e169e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-477179",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-477179/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-477179",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-477179",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-477179",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63de14d2944c7bce7a5ea4094457e376b4b063c2f7f06143ff37bd59f1016daa",
	            "SandboxKey": "/var/run/docker/netns/63de14d2944c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34569"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34570"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34573"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34571"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34572"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-477179": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:fd:3d:de:0d:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "70c91a2d56ea508083256c63182c2c3e1ef772ce7bb88e6562d5b5aa2b7beeaf",
	                    "EndpointID": "1418147bf4af69a6ecf9086788999d087e7479730e07462d7aafdeab78ca7332",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-477179",
	                        "431f1160e1d3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
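One detail worth pulling out of this inspect payload: the SSH endpoint every ssh_runner call above dials (127.0.0.1:34569) is simply NetworkSettings.Ports["22/tcp"][0].HostPort, read with the exact Go template visible in the cli_runner lines. A minimal standalone sketch of that lookup, assuming only the docker CLI is present; the function name is ours, the template is the one from the log.

	// Resolve the host port Docker mapped to a container's sshd, using
	// the same template the cli_runner log lines show. Assumed helper,
	// not minikube's code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostSSHPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("old-k8s-version-477179")
		if err != nil {
			fmt.Println(err)
			return
		}
		// For the container inspected above this prints 127.0.0.1:34569,
		// matching the sshutil.go line in the pause log.
		fmt.Println("127.0.0.1:" + port)
	}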
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-477179 -n old-k8s-version-477179
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-477179 -n old-k8s-version-477179: exit status 2 (491.693629ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-477179 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-477179 logs -n 25: (1.461452904s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-440075 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo docker system info                                                                                                                                                                                                      │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-477179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo containerd config dump                                                                                                                                                                                                  │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ stop    │ -p old-k8s-version-477179 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo crio config                                                                                                                                                                                                             │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ delete  │ -p bridge-440075                                                                                                                                                                                                                              │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ start   │ -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-947754      │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:24 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-477179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ start   │ -p old-k8s-version-477179 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:24 UTC │
	│ image   │ old-k8s-version-477179 image list --format=json                                                                                                                                                                                               │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │ 27 Oct 25 23:24 UTC │
	│ pause   │ -p old-k8s-version-477179 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 23:23:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 23:23:49.343502 1357280 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:23:49.343614 1357280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:23:49.343620 1357280 out.go:374] Setting ErrFile to fd 2...
	I1027 23:23:49.343624 1357280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:23:49.343865 1357280 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:23:49.344241 1357280 out.go:368] Setting JSON to false
	I1027 23:23:49.345088 1357280 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":21979,"bootTime":1761585451,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 23:23:49.345164 1357280 start.go:143] virtualization:  
	I1027 23:23:49.348574 1357280 out.go:179] * [old-k8s-version-477179] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 23:23:49.352436 1357280 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:23:49.352563 1357280 notify.go:221] Checking for updates...
	I1027 23:23:49.358460 1357280 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:23:49.361369 1357280 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:23:49.364172 1357280 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 23:23:49.366918 1357280 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 23:23:49.369750 1357280 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 23:23:49.373172 1357280 config.go:182] Loaded profile config "old-k8s-version-477179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 23:23:49.376526 1357280 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1027 23:23:49.379402 1357280 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:23:49.419764 1357280 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 23:23:49.419864 1357280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:23:49.497624 1357280 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-27 23:23:49.488329915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:23:49.497736 1357280 docker.go:318] overlay module found
	I1027 23:23:49.501507 1357280 out.go:179] * Using the docker driver based on existing profile
	I1027 23:23:46.473847 1355720 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-947754
	
	I1027 23:23:46.473879 1355720 ubuntu.go:182] provisioning hostname "no-preload-947754"
	I1027 23:23:46.473947 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:23:46.490222 1355720 main.go:143] libmachine: Using SSH client type: native
	I1027 23:23:46.490573 1355720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34564 <nil> <nil>}
	I1027 23:23:46.490593 1355720 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-947754 && echo "no-preload-947754" | sudo tee /etc/hostname
	I1027 23:23:46.647386 1355720 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-947754
	
	I1027 23:23:46.647521 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:23:46.664449 1355720 main.go:143] libmachine: Using SSH client type: native
	I1027 23:23:46.664752 1355720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34564 <nil> <nil>}
	I1027 23:23:46.664774 1355720 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-947754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-947754/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-947754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:23:46.814627 1355720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 23:23:46.814658 1355720 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:23:46.814687 1355720 ubuntu.go:190] setting up certificates
	I1027 23:23:46.814697 1355720 provision.go:84] configureAuth start
	I1027 23:23:46.814758 1355720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-947754
	I1027 23:23:46.831711 1355720 provision.go:143] copyHostCerts
	I1027 23:23:46.831779 1355720 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:23:46.831794 1355720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:23:46.831876 1355720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:23:46.831970 1355720 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:23:46.831979 1355720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:23:46.832004 1355720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:23:46.832087 1355720 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:23:46.832098 1355720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:23:46.832122 1355720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:23:46.832181 1355720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.no-preload-947754 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-947754]
	I1027 23:23:47.157243 1355720 provision.go:177] copyRemoteCerts
	I1027 23:23:47.157313 1355720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:23:47.157369 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:23:47.176200 1355720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34564 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:23:47.282333 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:23:47.299962 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 23:23:47.317742 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 23:23:47.335119 1355720 provision.go:87] duration metric: took 520.399235ms to configureAuth
	I1027 23:23:47.335152 1355720 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:23:47.335350 1355720 config.go:182] Loaded profile config "no-preload-947754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:23:47.335459 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:23:47.356760 1355720 main.go:143] libmachine: Using SSH client type: native
	I1027 23:23:47.357076 1355720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34564 <nil> <nil>}
	I1027 23:23:47.357092 1355720 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:23:47.615571 1355720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:23:47.615637 1355720 machine.go:97] duration metric: took 4.308645977s to provisionDockerMachine
	I1027 23:23:47.615666 1355720 client.go:176] duration metric: took 6.600769648s to LocalClient.Create
	I1027 23:23:47.615703 1355720 start.go:167] duration metric: took 6.60085929s to libmachine.API.Create "no-preload-947754"
	I1027 23:23:47.615723 1355720 start.go:293] postStartSetup for "no-preload-947754" (driver="docker")
	I1027 23:23:47.615775 1355720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:23:47.615857 1355720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:23:47.615936 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:23:47.634115 1355720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34564 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:23:47.738627 1355720 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:23:47.741837 1355720 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:23:47.741884 1355720 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:23:47.741896 1355720 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:23:47.741954 1355720 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:23:47.742059 1355720 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:23:47.742166 1355720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:23:47.749574 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:23:47.766533 1355720 start.go:296] duration metric: took 150.780907ms for postStartSetup
	I1027 23:23:47.766886 1355720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-947754
	I1027 23:23:47.783410 1355720 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/config.json ...
	I1027 23:23:47.783688 1355720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:23:47.783739 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:23:47.799803 1355720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34564 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:23:47.899210 1355720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:23:47.903614 1355720 start.go:128] duration metric: took 6.894564937s to createHost
	I1027 23:23:47.903674 1355720 start.go:83] releasing machines lock for "no-preload-947754", held for 6.894725357s
	I1027 23:23:47.903762 1355720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-947754
	I1027 23:23:47.920221 1355720 ssh_runner.go:195] Run: cat /version.json
	I1027 23:23:47.920274 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:23:47.920511 1355720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:23:47.920565 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:23:47.942091 1355720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34564 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:23:47.952162 1355720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34564 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:23:48.046462 1355720 ssh_runner.go:195] Run: systemctl --version
	I1027 23:23:48.144559 1355720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:23:48.178095 1355720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:23:48.182509 1355720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:23:48.182605 1355720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:23:48.211189 1355720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
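Per the cni.go:262 line above, the find/mv pipeline just executed sidelines every bridge/podman CNI config not already suffixed .mk_disabled. A rough sketch of the two renames it performed, reconstructed from the logged file list rather than taken literally from the log:
	sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
	sudo mv /etc/cni/net.d/10-crio-bridge.conflist.disabled /etc/cni/net.d/10-crio-bridge.conflist.disabled.mk_disabled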
	I1027 23:23:48.211227 1355720 start.go:496] detecting cgroup driver to use...
	I1027 23:23:48.211259 1355720 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 23:23:48.211320 1355720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:23:48.227946 1355720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:23:48.240798 1355720 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:23:48.240863 1355720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:23:48.258350 1355720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:23:48.276829 1355720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:23:48.395460 1355720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:23:48.521808 1355720 docker.go:234] disabling docker service ...
	I1027 23:23:48.521899 1355720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:23:48.545358 1355720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:23:48.559265 1355720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:23:48.699059 1355720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:23:48.887597 1355720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:23:48.913808 1355720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:23:48.928856 1355720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:23:48.928936 1355720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:48.945902 1355720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:23:48.945986 1355720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:48.956109 1355720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:48.966845 1355720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:48.983374 1355720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:23:49.027241 1355720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:49.038627 1355720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:49.058231 1355720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:49.069839 1355720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:23:49.077946 1355720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:23:49.085725 1355720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:23:49.236177 1355720 ssh_runner.go:195] Run: sudo systemctl restart crio
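Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) should leave /etc/crio/crio.conf.d/02-crio.conf containing roughly the fragment below by the time the daemon-reload and crio restart pick it up. This is a sketch assembled from the sed replacement strings, assuming a stock kicbase config; surrounding TOML sections are not shown in the log:
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]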
	I1027 23:23:49.394971 1355720 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:23:49.395052 1355720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:23:49.403157 1355720 start.go:564] Will wait 60s for crictl version
	I1027 23:23:49.403227 1355720 ssh_runner.go:195] Run: which crictl
	I1027 23:23:49.410205 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:23:49.461289 1355720 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 23:23:49.461382 1355720 ssh_runner.go:195] Run: crio --version
	I1027 23:23:49.500021 1355720 ssh_runner.go:195] Run: crio --version
	I1027 23:23:49.557119 1355720 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 23:23:49.504387 1357280 start.go:307] selected driver: docker
	I1027 23:23:49.504412 1357280 start.go:928] validating driver "docker" against &{Name:old-k8s-version-477179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-477179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:23:49.504531 1357280 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:23:49.505236 1357280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:23:49.587772 1357280 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-27 23:23:49.578143773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:23:49.588125 1357280 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:23:49.588159 1357280 cni.go:84] Creating CNI manager for ""
	I1027 23:23:49.588211 1357280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:23:49.588256 1357280 start.go:351] cluster config:
	{Name:old-k8s-version-477179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-477179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:23:49.591802 1357280 out.go:179] * Starting "old-k8s-version-477179" primary control-plane node in "old-k8s-version-477179" cluster
	I1027 23:23:49.594749 1357280 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 23:23:49.597783 1357280 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 23:23:49.600633 1357280 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 23:23:49.600687 1357280 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1027 23:23:49.600713 1357280 cache.go:59] Caching tarball of preloaded images
	I1027 23:23:49.600792 1357280 preload.go:233] Found /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 23:23:49.600800 1357280 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1027 23:23:49.600906 1357280 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/config.json ...
	I1027 23:23:49.601116 1357280 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 23:23:49.624743 1357280 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 23:23:49.624773 1357280 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 23:23:49.624787 1357280 cache.go:233] Successfully downloaded all kic artifacts
	I1027 23:23:49.624815 1357280 start.go:360] acquireMachinesLock for old-k8s-version-477179: {Name:mka53febc0a54f4faa3bdae2e66b439a96a1b896 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:23:49.624891 1357280 start.go:364] duration metric: took 33.223µs to acquireMachinesLock for "old-k8s-version-477179"
	I1027 23:23:49.624914 1357280 start.go:96] Skipping create...Using existing machine configuration
	I1027 23:23:49.624919 1357280 fix.go:55] fixHost starting: 
	I1027 23:23:49.625178 1357280 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:23:49.650118 1357280 fix.go:113] recreateIfNeeded on old-k8s-version-477179: state=Stopped err=<nil>
	W1027 23:23:49.650150 1357280 fix.go:139] unexpected machine state, will restart: <nil>
	I1027 23:23:49.560033 1355720 cli_runner.go:164] Run: docker network inspect no-preload-947754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:23:49.592045 1355720 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 23:23:49.596061 1355720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
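The bash one-liner above is minikube's idempotent /etc/hosts update: drop any stale host.minikube.internal entry, append the current gateway mapping, and copy the temp file back into place. Generalized as a sketch (HOST and IP are placeholder variables, not names from the log):
	HOST=host.minikube.internal IP=192.168.76.1
	{ grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts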
	I1027 23:23:49.607312 1355720 kubeadm.go:884] updating cluster {Name:no-preload-947754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-947754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:23:49.607425 1355720 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:23:49.607468 1355720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:23:49.641704 1355720 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1027 23:23:49.641732 1355720 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1027 23:23:49.641791 1355720 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:23:49.641797 1355720 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 23:23:49.641889 1355720 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 23:23:49.642126 1355720 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1027 23:23:49.642182 1355720 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 23:23:49.642339 1355720 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1027 23:23:49.642424 1355720 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 23:23:49.642599 1355720 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 23:23:49.643420 1355720 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 23:23:49.643964 1355720 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 23:23:49.644734 1355720 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1027 23:23:49.645120 1355720 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 23:23:49.645316 1355720 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 23:23:49.645478 1355720 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1027 23:23:49.645629 1355720 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:23:49.646478 1355720 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 23:23:49.876926 1355720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1027 23:23:49.877581 1355720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1027 23:23:49.886274 1355720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1027 23:23:49.886805 1355720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1027 23:23:49.887080 1355720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1027 23:23:49.890308 1355720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 23:23:49.897762 1355720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1027 23:23:50.155203 1355720 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1027 23:23:50.155253 1355720 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 23:23:50.155302 1355720 ssh_runner.go:195] Run: which crictl
	I1027 23:23:50.155389 1355720 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1027 23:23:50.155411 1355720 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 23:23:50.155439 1355720 ssh_runner.go:195] Run: which crictl
	I1027 23:23:50.183389 1355720 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1027 23:23:50.183427 1355720 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1027 23:23:50.183476 1355720 ssh_runner.go:195] Run: which crictl
	I1027 23:23:50.183539 1355720 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1027 23:23:50.183552 1355720 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1027 23:23:50.183573 1355720 ssh_runner.go:195] Run: which crictl
	I1027 23:23:50.183616 1355720 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1027 23:23:50.183629 1355720 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 23:23:50.183649 1355720 ssh_runner.go:195] Run: which crictl
	I1027 23:23:50.192729 1355720 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1027 23:23:50.192768 1355720 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 23:23:50.192819 1355720 ssh_runner.go:195] Run: which crictl
	I1027 23:23:50.192870 1355720 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1027 23:23:50.192882 1355720 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 23:23:50.192906 1355720 ssh_runner.go:195] Run: which crictl
	I1027 23:23:50.192984 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1027 23:23:50.193033 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1027 23:23:50.193079 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1027 23:23:50.202542 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1027 23:23:50.202955 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1027 23:23:50.349093 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1027 23:23:50.349163 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1027 23:23:50.349202 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 23:23:50.349261 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1027 23:23:50.349313 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1027 23:23:50.367212 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1027 23:23:50.367293 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1027 23:23:50.541521 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1027 23:23:50.541639 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1027 23:23:50.541705 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1027 23:23:50.541740 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 23:23:50.541771 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1027 23:23:50.556178 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1027 23:23:50.556337 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1027 23:23:50.737191 1355720 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1027 23:23:50.737282 1355720 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1027 23:23:50.737554 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1027 23:23:50.737301 1355720 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1027 23:23:50.737556 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1027 23:23:50.737432 1355720 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1027 23:23:50.737702 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1027 23:23:50.737738 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1027 23:23:50.737450 1355720 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1027 23:23:50.737793 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1027 23:23:50.737406 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1027 23:23:50.737454 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 23:23:49.653573 1357280 out.go:252] * Restarting existing docker container for "old-k8s-version-477179" ...
	I1027 23:23:49.653670 1357280 cli_runner.go:164] Run: docker start old-k8s-version-477179
	I1027 23:23:49.948522 1357280 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:23:49.978345 1357280 kic.go:430] container "old-k8s-version-477179" state is running.
	I1027 23:23:49.978784 1357280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-477179
	I1027 23:23:50.016560 1357280 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/config.json ...
	I1027 23:23:50.016832 1357280 machine.go:94] provisionDockerMachine start ...
	I1027 23:23:50.016921 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:50.049949 1357280 main.go:143] libmachine: Using SSH client type: native
	I1027 23:23:50.050285 1357280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34569 <nil> <nil>}
	I1027 23:23:50.050303 1357280 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:23:50.051175 1357280 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42288->127.0.0.1:34569: read: connection reset by peer
	I1027 23:23:53.214267 1357280 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-477179
	
	I1027 23:23:53.214302 1357280 ubuntu.go:182] provisioning hostname "old-k8s-version-477179"
	I1027 23:23:53.214365 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:53.236935 1357280 main.go:143] libmachine: Using SSH client type: native
	I1027 23:23:53.237250 1357280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34569 <nil> <nil>}
	I1027 23:23:53.237270 1357280 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-477179 && echo "old-k8s-version-477179" | sudo tee /etc/hostname
	I1027 23:23:53.412204 1357280 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-477179
	
	I1027 23:23:53.412307 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:53.439083 1357280 main.go:143] libmachine: Using SSH client type: native
	I1027 23:23:53.439393 1357280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34569 <nil> <nil>}
	I1027 23:23:53.439417 1357280 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-477179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-477179/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-477179' | sudo tee -a /etc/hosts; 
				fi
			fi
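The guarded snippet above ensures exactly one loopback alias for the newly set hostname; after it runs, /etc/hosts should contain a line like this (sketch of the expected result, not log output):
	127.0.1.1 old-k8s-version-477179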
	I1027 23:23:53.599197 1357280 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 23:23:53.599276 1357280 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:23:53.599312 1357280 ubuntu.go:190] setting up certificates
	I1027 23:23:53.599349 1357280 provision.go:84] configureAuth start
	I1027 23:23:53.599457 1357280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-477179
	I1027 23:23:53.621663 1357280 provision.go:143] copyHostCerts
	I1027 23:23:53.621740 1357280 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:23:53.621755 1357280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:23:53.621830 1357280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:23:53.621944 1357280 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:23:53.621950 1357280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:23:53.621977 1357280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:23:53.622049 1357280 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:23:53.622054 1357280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:23:53.622078 1357280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:23:53.622134 1357280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-477179 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-477179]
	I1027 23:23:53.937063 1357280 provision.go:177] copyRemoteCerts
	I1027 23:23:53.937187 1357280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:23:53.937271 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:53.955807 1357280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:23:54.063991 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:23:54.093343 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1027 23:23:54.118112 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 23:23:54.139377 1357280 provision.go:87] duration metric: took 539.988459ms to configureAuth
	I1027 23:23:54.139445 1357280 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:23:54.139666 1357280 config.go:182] Loaded profile config "old-k8s-version-477179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 23:23:54.139813 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:54.158334 1357280 main.go:143] libmachine: Using SSH client type: native
	I1027 23:23:54.158661 1357280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34569 <nil> <nil>}
	I1027 23:23:54.158677 1357280 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:23:50.784971 1355720 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1027 23:23:50.785141 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1027 23:23:50.785228 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1027 23:23:50.785286 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1027 23:23:50.785372 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1027 23:23:50.785474 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1027 23:23:50.785579 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1027 23:23:50.785613 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1027 23:23:50.785698 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1027 23:23:50.785729 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1027 23:23:50.785805 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1027 23:23:50.785834 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1027 23:23:50.785949 1355720 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1027 23:23:50.786039 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1027 23:23:50.841352 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1027 23:23:50.841385 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1027 23:23:50.841440 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1027 23:23:50.841453 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1027 23:23:50.871091 1355720 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1027 23:23:50.871210 1355720 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
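Each image in the LoadCachedImages list follows the same three-step pattern visible above: stat the tarball under /var/lib/minikube/images (exit status 1 means absent), scp it over from the host cache, then import it with podman load. One iteration, condensed into a sketch ("node" is a placeholder host; the actual steps run through minikube's ssh_runner):
	IMG=pause_3.10.1
	ssh node stat -c "%s %y" /var/lib/minikube/images/$IMG \
	  || scp ~/.minikube/cache/images/arm64/registry.k8s.io/$IMG node:/var/lib/minikube/images/$IMG
	ssh node sudo podman load -i /var/lib/minikube/images/$IMG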
	W1027 23:23:51.105665 1355720 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1027 23:23:51.105942 1355720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:23:51.269027 1355720 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1027 23:23:51.347155 1355720 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1027 23:23:51.347520 1355720 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:23:51.347602 1355720 ssh_runner.go:195] Run: which crictl
	I1027 23:23:51.358911 1355720 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1027 23:23:51.359026 1355720 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1027 23:23:51.417696 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:23:53.404713 1355720 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.045634467s)
	I1027 23:23:53.404737 1355720 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1027 23:23:53.404742 1355720 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.987014306s)
	I1027 23:23:53.404756 1355720 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1027 23:23:53.404803 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:23:53.404803 1355720 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1027 23:23:55.658586 1355720 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.253703673s)
	I1027 23:23:55.658610 1355720 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1027 23:23:55.658629 1355720 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1027 23:23:55.658675 1355720 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1027 23:23:55.658699 1355720 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.25387156s)
	I1027 23:23:55.658765 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:23:54.546224 1357280 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:23:54.546293 1357280 machine.go:97] duration metric: took 4.529438507s to provisionDockerMachine
	I1027 23:23:54.546322 1357280 start.go:293] postStartSetup for "old-k8s-version-477179" (driver="docker")
	I1027 23:23:54.546366 1357280 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:23:54.546476 1357280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:23:54.546576 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:54.574167 1357280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:23:54.679726 1357280 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:23:54.685927 1357280 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:23:54.685958 1357280 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:23:54.685969 1357280 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:23:54.686023 1357280 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:23:54.686118 1357280 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:23:54.686221 1357280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:23:54.694922 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:23:54.719403 1357280 start.go:296] duration metric: took 173.048882ms for postStartSetup
	I1027 23:23:54.719489 1357280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:23:54.719562 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:54.745408 1357280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:23:54.862023 1357280 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:23:54.867796 1357280 fix.go:57] duration metric: took 5.242868765s for fixHost
	I1027 23:23:54.867825 1357280 start.go:83] releasing machines lock for "old-k8s-version-477179", held for 5.242923338s
	I1027 23:23:54.867897 1357280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-477179
	I1027 23:23:54.898735 1357280 ssh_runner.go:195] Run: cat /version.json
	I1027 23:23:54.898796 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:54.899030 1357280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:23:54.899093 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:54.935512 1357280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:23:54.945178 1357280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:23:55.164577 1357280 ssh_runner.go:195] Run: systemctl --version
	I1027 23:23:55.171829 1357280 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:23:55.218305 1357280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:23:55.224476 1357280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:23:55.224550 1357280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:23:55.234467 1357280 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 23:23:55.234532 1357280 start.go:496] detecting cgroup driver to use...
	I1027 23:23:55.234582 1357280 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 23:23:55.234653 1357280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:23:55.251300 1357280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:23:55.266121 1357280 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:23:55.266230 1357280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:23:55.283359 1357280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:23:55.297861 1357280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:23:55.450015 1357280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:23:55.605338 1357280 docker.go:234] disabling docker service ...
	I1027 23:23:55.605463 1357280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:23:55.622884 1357280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:23:55.642699 1357280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:23:55.801794 1357280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:23:55.950604 1357280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:23:55.965723 1357280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:23:55.980950 1357280 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1027 23:23:55.981047 1357280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:55.990499 1357280 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:23:55.990594 1357280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:56.000224 1357280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:56.011612 1357280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:56.022156 1357280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:23:56.032258 1357280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:56.042887 1357280 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:56.052588 1357280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:56.062861 1357280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:23:56.072071 1357280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:23:56.081184 1357280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:23:56.227637 1357280 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 23:23:56.628927 1357280 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:23:56.629040 1357280 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:23:56.633326 1357280 start.go:564] Will wait 60s for crictl version
	I1027 23:23:56.633420 1357280 ssh_runner.go:195] Run: which crictl
	I1027 23:23:56.638681 1357280 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:23:56.676931 1357280 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 23:23:56.677038 1357280 ssh_runner.go:195] Run: crio --version
	I1027 23:23:56.743293 1357280 ssh_runner.go:195] Run: crio --version
	I1027 23:23:56.783878 1357280 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1027 23:23:56.787228 1357280 cli_runner.go:164] Run: docker network inspect old-k8s-version-477179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:23:56.809610 1357280 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 23:23:56.814099 1357280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:23:56.823930 1357280 kubeadm.go:884] updating cluster {Name:old-k8s-version-477179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-477179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:23:56.824060 1357280 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 23:23:56.824114 1357280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:23:56.864870 1357280 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:23:56.864970 1357280 crio.go:433] Images already preloaded, skipping extraction
	I1027 23:23:56.865061 1357280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:23:56.894104 1357280 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:23:56.894125 1357280 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:23:56.894132 1357280 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1027 23:23:56.894242 1357280 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-477179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-477179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 23:23:56.894322 1357280 ssh_runner.go:195] Run: crio config
	I1027 23:23:56.971192 1357280 cni.go:84] Creating CNI manager for ""
	I1027 23:23:56.971261 1357280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:23:56.971301 1357280 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 23:23:56.971355 1357280 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-477179 NodeName:old-k8s-version-477179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:23:56.971544 1357280 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-477179"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
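	The generated kubeadm/kubelet/kube-proxy document above deliberately disables disk-based housekeeping for CI: imageGCHighThresholdPercent: 100 turns off image garbage collection and the 0% evictionHard thresholds disable nodefs/imagefs eviction. As a sketch of how such a document can be rendered from a handful of cluster parameters (this is not minikube's real template, only an illustration with text/template):

package main

import (
	"os"
	"text/template"
)

// A pared-down skeleton of the ClusterConfiguration section above; the real
// template carries many more fields.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:8443
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	params := struct {
		KubernetesVersion, DNSDomain, PodSubnet, ServiceCIDR string
	}{"v1.28.0", "cluster.local", "10.244.0.0/16", "10.96.0.0/12"}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}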
	
	I1027 23:23:56.971633 1357280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1027 23:23:56.980463 1357280 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:23:56.980599 1357280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:23:56.988769 1357280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1027 23:23:57.002021 1357280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:23:57.019263 1357280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1027 23:23:57.050008 1357280 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:23:57.054579 1357280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:23:57.067759 1357280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:23:57.246509 1357280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:23:57.267506 1357280 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179 for IP: 192.168.85.2
	I1027 23:23:57.267530 1357280 certs.go:195] generating shared ca certs ...
	I1027 23:23:57.267549 1357280 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:23:57.267720 1357280 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:23:57.267775 1357280 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:23:57.267787 1357280 certs.go:257] generating profile certs ...
	I1027 23:23:57.267893 1357280 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.key
	I1027 23:23:57.267974 1357280 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/apiserver.key.e54ee9ff
	I1027 23:23:57.268023 1357280 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/proxy-client.key
	I1027 23:23:57.268168 1357280 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:23:57.268212 1357280 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:23:57.268225 1357280 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:23:57.268250 1357280 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:23:57.268286 1357280 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:23:57.268312 1357280 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:23:57.268366 1357280 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:23:57.269056 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:23:57.298976 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:23:57.323852 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:23:57.356487 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:23:57.385476 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1027 23:23:57.419133 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 23:23:57.463366 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:23:57.512692 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 23:23:57.562032 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:23:57.596237 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:23:57.629615 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:23:57.672159 1357280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:23:57.689056 1357280 ssh_runner.go:195] Run: openssl version
	I1027 23:23:57.696102 1357280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:23:57.706532 1357280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:23:57.711540 1357280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:23:57.711627 1357280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:23:57.755892 1357280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:23:57.764990 1357280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:23:57.775297 1357280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:23:57.779179 1357280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:23:57.779259 1357280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:23:57.825111 1357280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 23:23:57.834033 1357280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:23:57.843274 1357280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:23:57.847523 1357280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:23:57.847647 1357280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:23:57.895930 1357280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
	I1027 23:23:57.904694 1357280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:23:57.909986 1357280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 23:23:57.952870 1357280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 23:23:58.049123 1357280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 23:23:58.143084 1357280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 23:23:58.206668 1357280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 23:23:58.310997 1357280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
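Each `openssl x509 -checkend 86400` above asks whether a certificate expires within the next 24 hours. The equivalent check in pure Go with crypto/x509 (path taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}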
	I1027 23:23:58.431912 1357280 kubeadm.go:401] StartCluster: {Name:old-k8s-version-477179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-477179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:23:58.432067 1357280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:23:58.432172 1357280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:23:58.558818 1357280 cri.go:89] found id: "31d2036be45f7a86c828442bcf45019e9bddf4f8b4f0001aa49eaad623860144"
	I1027 23:23:58.558881 1357280 cri.go:89] found id: "4cc4ea0f92239fc9155b151efab480bb22dbf8b3551f7c315daae1493853f27f"
	I1027 23:23:58.558901 1357280 cri.go:89] found id: "4df94ad74d55d5841a5ebd671ae3a091cbc30efa3d08697d8baed42fd415cbf1"
	I1027 23:23:58.558928 1357280 cri.go:89] found id: "0daf78b0c28b92f6f69bc82b09d8267753a05593afe602cb3abe6fd2fe226dd4"
	I1027 23:23:58.558961 1357280 cri.go:89] found id: ""
	I1027 23:23:58.559043 1357280 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 23:23:58.630166 1357280 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:23:58Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:23:58.630306 1357280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:23:58.658530 1357280 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 23:23:58.658593 1357280 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 23:23:58.658678 1357280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 23:23:58.701530 1357280 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 23:23:58.702014 1357280 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-477179" does not appear in /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:23:58.702167 1357280 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-1132878/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-477179" cluster setting kubeconfig missing "old-k8s-version-477179" context setting]
	I1027 23:23:58.702511 1357280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
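Here the kubeconfig is repaired by adding the missing cluster and context entries before the file is rewritten under the lock. A sketch of the same repair using client-go's clientcmd package (names and paths copied from the log; error handling kept minimal, and the CA path is an assumption):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/21790-1132878/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	name := "old-k8s-version-477179"
	if _, ok := cfg.Clusters[name]; !ok {
		c := clientcmdapi.NewCluster()
		c.Server = "https://192.168.85.2:8443"
		c.CertificateAuthority = "/home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt"
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := clientcmdapi.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}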
	I1027 23:23:58.704072 1357280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 23:23:58.729812 1357280 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 23:23:58.729898 1357280 kubeadm.go:602] duration metric: took 71.284011ms to restartPrimaryControlPlane
	I1027 23:23:58.729922 1357280 kubeadm.go:403] duration metric: took 298.022711ms to StartCluster
	I1027 23:23:58.729966 1357280 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:23:58.730046 1357280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:23:58.730687 1357280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:23:58.730945 1357280 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:23:58.731363 1357280 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:23:58.731435 1357280 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-477179"
	I1027 23:23:58.731448 1357280 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-477179"
	W1027 23:23:58.731454 1357280 addons.go:247] addon storage-provisioner should already be in state true
	I1027 23:23:58.731474 1357280 host.go:66] Checking if "old-k8s-version-477179" exists ...
	I1027 23:23:58.732095 1357280 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:23:58.732452 1357280 config.go:182] Loaded profile config "old-k8s-version-477179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 23:23:58.732602 1357280 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-477179"
	I1027 23:23:58.732638 1357280 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-477179"
	I1027 23:23:58.732723 1357280 addons.go:69] Setting dashboard=true in profile "old-k8s-version-477179"
	I1027 23:23:58.732735 1357280 addons.go:238] Setting addon dashboard=true in "old-k8s-version-477179"
	W1027 23:23:58.732741 1357280 addons.go:247] addon dashboard should already be in state true
	I1027 23:23:58.732783 1357280 host.go:66] Checking if "old-k8s-version-477179" exists ...
	I1027 23:23:58.733248 1357280 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:23:58.733652 1357280 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:23:58.735104 1357280 out.go:179] * Verifying Kubernetes components...
	I1027 23:23:58.738050 1357280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:23:58.785101 1357280 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 23:23:58.785110 1357280 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:23:58.790483 1357280 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:23:58.790508 1357280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:23:58.790574 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:58.793691 1357280 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 23:23:58.795606 1357280 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-477179"
	W1027 23:23:58.795628 1357280 addons.go:247] addon default-storageclass should already be in state true
	I1027 23:23:58.795651 1357280 host.go:66] Checking if "old-k8s-version-477179" exists ...
	I1027 23:23:58.796781 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 23:23:58.796797 1357280 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 23:23:58.796869 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:58.797379 1357280 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:23:58.845041 1357280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:23:58.848705 1357280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:23:58.858972 1357280 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:23:58.858992 1357280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:23:58.859055 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:58.883877 1357280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:23:59.196431 1357280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:23:59.243693 1357280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:23:57.633591 1355720 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.974806489s)
	I1027 23:23:57.633635 1355720 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1027 23:23:57.633682 1355720 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.97498108s)
	I1027 23:23:57.633704 1355720 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1027 23:23:57.633720 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1027 23:23:57.633727 1355720 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1027 23:23:57.633772 1355720 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1027 23:23:59.833991 1355720 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (2.20018977s)
	I1027 23:23:59.834015 1355720 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1027 23:23:59.834032 1355720 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1027 23:23:59.834077 1355720 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1027 23:23:59.834135 1355720 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.200406011s)
	I1027 23:23:59.834149 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1027 23:23:59.834163 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1027 23:23:59.401510 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 23:23:59.401585 1357280 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 23:23:59.416039 1357280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:23:59.554900 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 23:23:59.554975 1357280 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 23:23:59.653696 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 23:23:59.653771 1357280 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 23:23:59.733256 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 23:23:59.733329 1357280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 23:23:59.788893 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 23:23:59.788970 1357280 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 23:23:59.813322 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 23:23:59.813402 1357280 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 23:23:59.860259 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 23:23:59.860342 1357280 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 23:23:59.887899 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 23:23:59.887978 1357280 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 23:23:59.926027 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 23:23:59.926101 1357280 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 23:23:59.965000 1357280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 23:24:01.669636 1355720 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.835536362s)
	I1027 23:24:01.669708 1355720 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1027 23:24:01.669756 1355720 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1027 23:24:01.669839 1355720 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1027 23:24:09.304145 1357280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.107683297s)
	I1027 23:24:09.304469 1357280 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.060700642s)
	I1027 23:24:09.304498 1357280 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-477179" to be "Ready" ...
	I1027 23:24:06.736505 1355720 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.066619335s)
	I1027 23:24:06.736529 1355720 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1027 23:24:06.736547 1355720 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1027 23:24:06.736594 1355720 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1027 23:24:07.642312 1355720 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1027 23:24:07.642350 1355720 cache_images.go:125] Successfully loaded all cached images
	I1027 23:24:07.642356 1355720 cache_images.go:94] duration metric: took 18.000608839s to LoadCachedImages
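The 18s LoadCachedImages phase that ends here copies each missing tarball to the node and imports it with `sudo podman load -i`; in the kicbase image podman and CRI-O share the same containers/storage, which is why images loaded this way then appear in `crictl images`. A stripped-down sketch of that loop (tarball paths taken from this run):

package main

import (
	"fmt"
	"os/exec"
)

// loadImage imports a cached image tarball into the node's container storage.
func loadImage(tarball string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	for _, t := range []string{
		"/var/lib/minikube/images/kube-apiserver_v1.34.1",
		"/var/lib/minikube/images/etcd_3.6.4-0",
		"/var/lib/minikube/images/storage-provisioner_v5",
	} {
		if err := loadImage(t); err != nil {
			panic(err)
		}
	}
}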
	I1027 23:24:07.642367 1355720 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 23:24:07.642479 1355720 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-947754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-947754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 23:24:07.642571 1355720 ssh_runner.go:195] Run: crio config
	I1027 23:24:07.731140 1355720 cni.go:84] Creating CNI manager for ""
	I1027 23:24:07.731166 1355720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:24:07.731188 1355720 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 23:24:07.731214 1355720 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-947754 NodeName:no-preload-947754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:24:07.731345 1355720 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-947754"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 23:24:07.731422 1355720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:24:07.745303 1355720 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1027 23:24:07.745364 1355720 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1027 23:24:07.761132 1355720 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1027 23:24:07.761223 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1027 23:24:07.762174 1355720 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1027 23:24:07.762745 1355720 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/linux/arm64/v1.34.1/kubeadm
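Each binary download above is verified against the published .sha256 file. A sketch of that pattern: fetch the checksum, stream the download through a SHA-256 hasher while writing it to disk, then compare digests (URL from the log; retry and resume logic omitted):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

// download writes url to dest and fails on a checksum mismatch.
func download(url, dest string) error {
	sum, err := fetch(url + ".sha256")
	if err != nil {
		return err
	}
	want := strings.Fields(string(sum))[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	// Hash the bytes as they are written so the file is read only once.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", url, got, want)
	}
	return nil
}

func main() {
	if err := download("https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl", "kubectl"); err != nil {
		panic(err)
	}
}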
	I1027 23:24:07.772035 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1027 23:24:07.772072 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1027 23:24:08.833955 1355720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:24:08.852088 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1027 23:24:08.859059 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1027 23:24:08.859092 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1027 23:24:09.206883 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1027 23:24:09.233294 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1027 23:24:09.233335 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1027 23:24:09.903349 1355720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:24:09.914716 1355720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 23:24:09.956939 1355720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:24:09.986063 1355720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1027 23:24:10.020733 1355720 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:24:10.030200 1355720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:24:10.050701 1355720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:24:10.285815 1355720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:24:10.305548 1355720 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754 for IP: 192.168.76.2
	I1027 23:24:10.305622 1355720 certs.go:195] generating shared ca certs ...
	I1027 23:24:10.305653 1355720 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:24:10.305834 1355720 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:24:10.305915 1355720 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:24:10.305949 1355720 certs.go:257] generating profile certs ...
	I1027 23:24:10.306030 1355720 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/client.key
	I1027 23:24:10.306069 1355720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/client.crt with IP's: []
	I1027 23:24:09.393523 1357280 node_ready.go:49] node "old-k8s-version-477179" is "Ready"
	I1027 23:24:09.393549 1357280 node_ready.go:38] duration metric: took 89.039618ms for node "old-k8s-version-477179" to be "Ready" ...
	I1027 23:24:09.393565 1357280 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:24:09.393625 1357280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:24:10.388025 1357280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.971909082s)
	I1027 23:24:10.973967 1357280 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.580321707s)
	I1027 23:24:10.973996 1357280 api_server.go:72] duration metric: took 12.243001739s to wait for apiserver process to appear ...
	I1027 23:24:10.974002 1357280 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:24:10.974021 1357280 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 23:24:10.974556 1357280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.00945109s)
	I1027 23:24:10.977938 1357280 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-477179 addons enable metrics-server
	
	I1027 23:24:10.980919 1357280 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1027 23:24:10.983911 1357280 addons.go:514] duration metric: took 12.252533024s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1027 23:24:10.994846 1357280 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 23:24:10.996771 1357280 api_server.go:141] control plane version: v1.28.0
	I1027 23:24:10.996794 1357280 api_server.go:131] duration metric: took 22.784781ms to wait for apiserver health ...
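The healthz wait above polls https://192.168.85.2:8443/healthz until it returns 200 with body "ok". A compact sketch of such a poll (it skips TLS verification for brevity; a faithful version would pin the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("apiserver never became healthy")
}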
	I1027 23:24:10.996803 1357280 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:24:11.005153 1357280 system_pods.go:59] 8 kube-system pods found
	I1027 23:24:11.005200 1357280 system_pods.go:61] "coredns-5dd5756b68-zmrh9" [da1efa5b-0929-4757-a96a-7b030212b09b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:24:11.005212 1357280 system_pods.go:61] "etcd-old-k8s-version-477179" [be864fb9-c8b5-4aae-bc2d-69d5d9d85994] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:24:11.005219 1357280 system_pods.go:61] "kindnet-z26d6" [3b032e58-90ac-4c80-95f1-1d1fcb2b96f3] Running
	I1027 23:24:11.005227 1357280 system_pods.go:61] "kube-apiserver-old-k8s-version-477179" [72d86f1f-8f08-49fe-bf99-ec1a3849859f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:24:11.005235 1357280 system_pods.go:61] "kube-controller-manager-old-k8s-version-477179" [78689547-e0c2-45a3-a2d8-2ee973b8d629] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:24:11.005243 1357280 system_pods.go:61] "kube-proxy-t6hvl" [2953b030-a25c-4882-9fab-7361700ee9ec] Running
	I1027 23:24:11.005253 1357280 system_pods.go:61] "kube-scheduler-old-k8s-version-477179" [b84fc635-c8d8-4276-9dc5-3c077b3cb355] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:24:11.005265 1357280 system_pods.go:61] "storage-provisioner" [cbfbf2cd-d56e-4b50-80d3-178ee16d8c54] Running
	I1027 23:24:11.005272 1357280 system_pods.go:74] duration metric: took 8.463348ms to wait for pod list to return data ...
	I1027 23:24:11.005286 1357280 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:24:11.008614 1357280 default_sa.go:45] found service account: "default"
	I1027 23:24:11.008642 1357280 default_sa.go:55] duration metric: took 3.34984ms for default service account to be created ...
	I1027 23:24:11.008653 1357280 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:24:11.013637 1357280 system_pods.go:86] 8 kube-system pods found
	I1027 23:24:11.013672 1357280 system_pods.go:89] "coredns-5dd5756b68-zmrh9" [da1efa5b-0929-4757-a96a-7b030212b09b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:24:11.013680 1357280 system_pods.go:89] "etcd-old-k8s-version-477179" [be864fb9-c8b5-4aae-bc2d-69d5d9d85994] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:24:11.013687 1357280 system_pods.go:89] "kindnet-z26d6" [3b032e58-90ac-4c80-95f1-1d1fcb2b96f3] Running
	I1027 23:24:11.013694 1357280 system_pods.go:89] "kube-apiserver-old-k8s-version-477179" [72d86f1f-8f08-49fe-bf99-ec1a3849859f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:24:11.013700 1357280 system_pods.go:89] "kube-controller-manager-old-k8s-version-477179" [78689547-e0c2-45a3-a2d8-2ee973b8d629] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:24:11.013706 1357280 system_pods.go:89] "kube-proxy-t6hvl" [2953b030-a25c-4882-9fab-7361700ee9ec] Running
	I1027 23:24:11.013712 1357280 system_pods.go:89] "kube-scheduler-old-k8s-version-477179" [b84fc635-c8d8-4276-9dc5-3c077b3cb355] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:24:11.013717 1357280 system_pods.go:89] "storage-provisioner" [cbfbf2cd-d56e-4b50-80d3-178ee16d8c54] Running
	I1027 23:24:11.013729 1357280 system_pods.go:126] duration metric: took 5.070332ms to wait for k8s-apps to be running ...
	I1027 23:24:11.013748 1357280 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:24:11.013808 1357280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:24:11.039931 1357280 system_svc.go:56] duration metric: took 26.17377ms WaitForService to wait for kubelet
	I1027 23:24:11.039961 1357280 kubeadm.go:587] duration metric: took 12.308965281s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:24:11.039981 1357280 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:24:11.046418 1357280 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:24:11.046451 1357280 node_conditions.go:123] node cpu capacity is 2
	I1027 23:24:11.046464 1357280 node_conditions.go:105] duration metric: took 6.477851ms to run NodePressure ...
	I1027 23:24:11.046477 1357280 start.go:242] waiting for startup goroutines ...
	I1027 23:24:11.046484 1357280 start.go:247] waiting for cluster config update ...
	I1027 23:24:11.046495 1357280 start.go:256] writing updated cluster config ...
	I1027 23:24:11.046788 1357280 ssh_runner.go:195] Run: rm -f paused
	I1027 23:24:11.050730 1357280 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:24:11.057569 1357280 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-zmrh9" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 23:24:13.065502 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
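The pod_ready loop that begins here re-fetches each control-plane pod and checks its Ready condition, treating a deleted pod as done. A sketch of one such wait with client-go (pod name and kubeconfig path copied from the log; the overall 4m0s budget is omitted):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21790-1132878/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-5dd5756b68-zmrh9", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("pod is gone; treating as done")
			return
		}
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}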
	I1027 23:24:11.397828 1355720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/client.crt ...
	I1027 23:24:11.397863 1355720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/client.crt: {Name:mk246faa386b3d632d180b2ddb2a2af262a530fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:24:11.398076 1355720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/client.key ...
	I1027 23:24:11.398093 1355720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/client.key: {Name:mk1b16d53560d716c6187e1f2fd113fce11edbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:24:11.398187 1355720 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.key.2667a321
	I1027 23:24:11.398202 1355720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.crt.2667a321 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1027 23:24:11.932196 1355720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.crt.2667a321 ...
	I1027 23:24:11.932227 1355720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.crt.2667a321: {Name:mk526f75af43fe7a780cc0ce069546e301aae526 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:24:11.932414 1355720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.key.2667a321 ...
	I1027 23:24:11.932429 1355720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.key.2667a321: {Name:mk309f8b38f38f0ba578115f11af46e18b11b566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:24:11.932521 1355720 certs.go:382] copying /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.crt.2667a321 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.crt
	I1027 23:24:11.932604 1355720 certs.go:386] copying /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.key.2667a321 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.key
	I1027 23:24:11.932665 1355720 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.key
	I1027 23:24:11.932684 1355720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.crt with IP's: []
	I1027 23:24:12.846535 1355720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.crt ...
	I1027 23:24:12.846568 1355720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.crt: {Name:mk17fa605865835ca4425e4ef85856b55ea972fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:24:12.846773 1355720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.key ...
	I1027 23:24:12.846791 1355720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.key: {Name:mk8233c77ac72dd69e58085a1456a8b1640fd665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:24:12.846981 1355720 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:24:12.847025 1355720 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:24:12.847037 1355720 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:24:12.847061 1355720 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:24:12.847090 1355720 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:24:12.847117 1355720 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:24:12.847158 1355720 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:24:12.847717 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:24:12.887693 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:24:12.906482 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:24:12.927072 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:24:12.946691 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 23:24:12.965381 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 23:24:12.984625 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:24:13.004031 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 23:24:13.024560 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:24:13.043854 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:24:13.064312 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:24:13.084086 1355720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:24:13.098928 1355720 ssh_runner.go:195] Run: openssl version
	I1027 23:24:13.105847 1355720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:24:13.115229 1355720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:24:13.119718 1355720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:24:13.119787 1355720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:24:13.161089 1355720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 23:24:13.171745 1355720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:24:13.180869 1355720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:24:13.185547 1355720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:24:13.185621 1355720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:24:13.228019 1355720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
	I1027 23:24:13.237105 1355720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:24:13.246006 1355720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:24:13.250328 1355720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:24:13.250452 1355720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:24:13.292905 1355720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
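	[editor's note] The sequence above installs each CA into the node's system trust store: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink the hash name (e.g. b5213941.0) into /etc/ssl/certs so TLS lookups can find it. A minimal local sketch of that step — not minikube's actual certs.go, which runs these commands over SSH; installCA is a hypothetical helper:

```go
// Hypothetical sketch: install a CA cert the way the log above shows,
// by hashing it with openssl and symlinking <hash>.0 to the PEM.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath string) error {
	// "openssl x509 -hash -noout" prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	// Equivalent of "ln -fs": drop any stale link, then point it at the PEM.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```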
	I1027 23:24:13.303651 1355720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:24:13.307909 1355720 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 23:24:13.308008 1355720 kubeadm.go:401] StartCluster: {Name:no-preload-947754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-947754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:24:13.308096 1355720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:24:13.308155 1355720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:24:13.338339 1355720 cri.go:89] found id: ""
	I1027 23:24:13.338488 1355720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:24:13.347388 1355720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 23:24:13.356296 1355720 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 23:24:13.356367 1355720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 23:24:13.365577 1355720 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 23:24:13.365603 1355720 kubeadm.go:158] found existing configuration files:
	
	I1027 23:24:13.365701 1355720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 23:24:13.376171 1355720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 23:24:13.376264 1355720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 23:24:13.384677 1355720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 23:24:13.393189 1355720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 23:24:13.393307 1355720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 23:24:13.401913 1355720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 23:24:13.410514 1355720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 23:24:13.410633 1355720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 23:24:13.418982 1355720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 23:24:13.427782 1355720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 23:24:13.427901 1355720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 23:24:13.439057 1355720 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 23:24:13.527225 1355720 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 23:24:13.527484 1355720 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 23:24:13.597189 1355720 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1027 23:24:15.065671 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	W1027 23:24:17.564434 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	W1027 23:24:20.067381 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	W1027 23:24:22.069641 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	W1027 23:24:24.072187 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	W1027 23:24:26.568587 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	W1027 23:24:29.069598 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	I1027 23:24:31.979398 1355720 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 23:24:31.979462 1355720 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 23:24:31.979557 1355720 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 23:24:31.979618 1355720 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 23:24:31.979658 1355720 kubeadm.go:319] OS: Linux
	I1027 23:24:31.979708 1355720 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 23:24:31.979762 1355720 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1027 23:24:31.979814 1355720 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 23:24:31.979868 1355720 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 23:24:31.979922 1355720 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 23:24:31.979978 1355720 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 23:24:31.980031 1355720 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 23:24:31.980085 1355720 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 23:24:31.980139 1355720 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1027 23:24:31.980218 1355720 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 23:24:31.980320 1355720 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 23:24:31.980425 1355720 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 23:24:31.980494 1355720 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 23:24:31.983788 1355720 out.go:252]   - Generating certificates and keys ...
	I1027 23:24:31.983940 1355720 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 23:24:31.984039 1355720 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 23:24:31.984166 1355720 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 23:24:31.984255 1355720 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 23:24:31.984355 1355720 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 23:24:31.984416 1355720 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 23:24:31.984481 1355720 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 23:24:31.984620 1355720 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-947754] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 23:24:31.984683 1355720 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 23:24:31.984820 1355720 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-947754] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 23:24:31.984897 1355720 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 23:24:31.984973 1355720 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 23:24:31.985027 1355720 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 23:24:31.985094 1355720 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 23:24:31.985160 1355720 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 23:24:31.985228 1355720 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 23:24:31.985295 1355720 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 23:24:31.985373 1355720 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 23:24:31.985439 1355720 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 23:24:31.985532 1355720 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 23:24:31.985608 1355720 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 23:24:31.988781 1355720 out.go:252]   - Booting up control plane ...
	I1027 23:24:31.988904 1355720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 23:24:31.989001 1355720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 23:24:31.989081 1355720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 23:24:31.989202 1355720 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 23:24:31.989309 1355720 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 23:24:31.989429 1355720 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 23:24:31.989527 1355720 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 23:24:31.989576 1355720 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 23:24:31.989727 1355720 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 23:24:31.989849 1355720 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 23:24:31.989926 1355720 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.502318568s
	I1027 23:24:31.990049 1355720 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 23:24:31.990142 1355720 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1027 23:24:31.990245 1355720 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 23:24:31.990335 1355720 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 23:24:31.990507 1355720 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.875689415s
	I1027 23:24:31.990611 1355720 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.579022358s
	I1027 23:24:31.990725 1355720 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.503198217s
	I1027 23:24:31.990966 1355720 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 23:24:31.991152 1355720 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 23:24:31.991281 1355720 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 23:24:31.991519 1355720 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-947754 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 23:24:31.991608 1355720 kubeadm.go:319] [bootstrap-token] Using token: ii6ez7.m5js9anpys51h0g4
	I1027 23:24:31.994958 1355720 out.go:252]   - Configuring RBAC rules ...
	I1027 23:24:31.995172 1355720 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 23:24:31.995270 1355720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 23:24:31.995419 1355720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 23:24:31.995598 1355720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 23:24:31.995761 1355720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 23:24:31.995887 1355720 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 23:24:31.996051 1355720 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 23:24:31.996129 1355720 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 23:24:31.996203 1355720 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 23:24:31.996214 1355720 kubeadm.go:319] 
	I1027 23:24:31.996293 1355720 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 23:24:31.996303 1355720 kubeadm.go:319] 
	I1027 23:24:31.996414 1355720 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 23:24:31.996426 1355720 kubeadm.go:319] 
	I1027 23:24:31.996470 1355720 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 23:24:31.996555 1355720 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 23:24:31.996634 1355720 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 23:24:31.996645 1355720 kubeadm.go:319] 
	I1027 23:24:31.996719 1355720 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 23:24:31.996729 1355720 kubeadm.go:319] 
	I1027 23:24:31.996795 1355720 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 23:24:31.996836 1355720 kubeadm.go:319] 
	I1027 23:24:31.996916 1355720 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 23:24:31.997043 1355720 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 23:24:31.997138 1355720 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 23:24:31.997171 1355720 kubeadm.go:319] 
	I1027 23:24:31.997283 1355720 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 23:24:31.997410 1355720 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 23:24:31.997421 1355720 kubeadm.go:319] 
	I1027 23:24:31.997547 1355720 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ii6ez7.m5js9anpys51h0g4 \
	I1027 23:24:31.997701 1355720 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 \
	I1027 23:24:31.997752 1355720 kubeadm.go:319] 	--control-plane 
	I1027 23:24:31.997764 1355720 kubeadm.go:319] 
	I1027 23:24:31.997869 1355720 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 23:24:31.997897 1355720 kubeadm.go:319] 
	I1027 23:24:31.998022 1355720 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ii6ez7.m5js9anpys51h0g4 \
	I1027 23:24:31.998183 1355720 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 
	I1027 23:24:31.998197 1355720 cni.go:84] Creating CNI manager for ""
	I1027 23:24:31.998214 1355720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:24:32.003777 1355720 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1027 23:24:31.564157 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	W1027 23:24:33.565413 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	I1027 23:24:32.006973 1355720 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 23:24:32.020250 1355720 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 23:24:32.020287 1355720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 23:24:32.080823 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
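	[editor's note] For the CNI step just logged, minikube materializes the generated kindnet manifest on the node ("scp memory --> /var/tmp/minikube/cni.yaml") and applies it with the pinned kubectl binary. A minimal sketch under those assumptions — applyCNI is a hypothetical helper, and the real code streams the bytes over SSH rather than writing a local file:

```go
// Hypothetical sketch: write a CNI manifest and apply it with the
// version-pinned kubectl, mirroring the paths in the log above.
package main

import (
	"os"
	"os/exec"
)

func applyCNI(manifest []byte) error {
	const path = "/var/tmp/minikube/cni.yaml"
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		return err
	}
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Read the manifest from a file given on the command line.
	manifest, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	if err := applyCNI(manifest); err != nil {
		panic(err)
	}
}
```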
	I1027 23:24:32.533635 1355720 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 23:24:32.533765 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:32.533833 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-947754 minikube.k8s.io/updated_at=2025_10_27T23_24_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=no-preload-947754 minikube.k8s.io/primary=true
	I1027 23:24:32.920118 1355720 ops.go:34] apiserver oom_adj: -16
	I1027 23:24:32.920222 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:33.420715 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:33.920900 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:34.421025 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:34.921190 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:35.421003 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:35.920829 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:36.421176 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:36.921167 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:37.083564 1355720 kubeadm.go:1114] duration metric: took 4.54984352s to wait for elevateKubeSystemPrivileges
	I1027 23:24:37.083596 1355720 kubeadm.go:403] duration metric: took 23.775593845s to StartCluster
	I1027 23:24:37.083614 1355720 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:24:37.083679 1355720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:24:37.084689 1355720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:24:37.084937 1355720 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:24:37.085076 1355720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 23:24:37.085361 1355720 config.go:182] Loaded profile config "no-preload-947754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:24:37.085405 1355720 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:24:37.085496 1355720 addons.go:69] Setting storage-provisioner=true in profile "no-preload-947754"
	I1027 23:24:37.085512 1355720 addons.go:238] Setting addon storage-provisioner=true in "no-preload-947754"
	I1027 23:24:37.085537 1355720 host.go:66] Checking if "no-preload-947754" exists ...
	I1027 23:24:37.086043 1355720 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:24:37.086578 1355720 addons.go:69] Setting default-storageclass=true in profile "no-preload-947754"
	I1027 23:24:37.086611 1355720 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-947754"
	I1027 23:24:37.086913 1355720 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:24:37.088192 1355720 out.go:179] * Verifying Kubernetes components...
	I1027 23:24:37.090523 1355720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:24:37.121614 1355720 addons.go:238] Setting addon default-storageclass=true in "no-preload-947754"
	I1027 23:24:37.121655 1355720 host.go:66] Checking if "no-preload-947754" exists ...
	I1027 23:24:37.122103 1355720 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:24:37.146430 1355720 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:24:37.150297 1355720 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:24:37.150320 1355720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:24:37.150403 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:24:37.157322 1355720 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:24:37.157346 1355720 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:24:37.157417 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:24:37.194916 1355720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34564 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:24:37.198602 1355720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34564 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:24:37.503432 1355720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 23:24:37.503608 1355720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:24:37.556341 1355720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:24:37.585951 1355720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:24:38.302109 1355720 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1027 23:24:38.303479 1355720 node_ready.go:35] waiting up to 6m0s for node "no-preload-947754" to be "Ready" ...
	I1027 23:24:38.727247 1355720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.170822687s)
	I1027 23:24:38.727304 1355720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.141284448s)
	I1027 23:24:38.748703 1355720 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1027 23:24:36.064714 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	W1027 23:24:38.069288 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	I1027 23:24:38.751660 1355720 addons.go:514] duration metric: took 1.666232664s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 23:24:38.809427 1355720 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-947754" context rescaled to 1 replicas
	W1027 23:24:40.307394 1355720 node_ready.go:57] node "no-preload-947754" has "Ready":"False" status (will retry)
	I1027 23:24:39.566046 1357280 pod_ready.go:94] pod "coredns-5dd5756b68-zmrh9" is "Ready"
	I1027 23:24:39.566079 1357280 pod_ready.go:86] duration metric: took 28.508476341s for pod "coredns-5dd5756b68-zmrh9" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:39.570068 1357280 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:39.579481 1357280 pod_ready.go:94] pod "etcd-old-k8s-version-477179" is "Ready"
	I1027 23:24:39.579508 1357280 pod_ready.go:86] duration metric: took 9.413631ms for pod "etcd-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:39.584766 1357280 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:39.607206 1357280 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-477179" is "Ready"
	I1027 23:24:39.607237 1357280 pod_ready.go:86] duration metric: took 22.438068ms for pod "kube-apiserver-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:39.617005 1357280 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:39.764475 1357280 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-477179" is "Ready"
	I1027 23:24:39.764503 1357280 pod_ready.go:86] duration metric: took 147.470144ms for pod "kube-controller-manager-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:39.964654 1357280 pod_ready.go:83] waiting for pod "kube-proxy-t6hvl" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:40.361910 1357280 pod_ready.go:94] pod "kube-proxy-t6hvl" is "Ready"
	I1027 23:24:40.361937 1357280 pod_ready.go:86] duration metric: took 397.250789ms for pod "kube-proxy-t6hvl" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:40.563045 1357280 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:40.961669 1357280 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-477179" is "Ready"
	I1027 23:24:40.961744 1357280 pod_ready.go:86] duration metric: took 398.672381ms for pod "kube-scheduler-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:40.961771 1357280 pod_ready.go:40] duration metric: took 29.911007605s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:24:41.052570 1357280 start.go:626] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1027 23:24:41.056036 1357280 out.go:203] 
	W1027 23:24:41.059137 1357280 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1027 23:24:41.062116 1357280 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1027 23:24:41.065014 1357280 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-477179" cluster and "default" namespace by default
	W1027 23:24:42.806450 1355720 node_ready.go:57] node "no-preload-947754" has "Ready":"False" status (will retry)
	W1027 23:24:45.311181 1355720 node_ready.go:57] node "no-preload-947754" has "Ready":"False" status (will retry)
	W1027 23:24:47.807211 1355720 node_ready.go:57] node "no-preload-947754" has "Ready":"False" status (will retry)
	W1027 23:24:50.306666 1355720 node_ready.go:57] node "no-preload-947754" has "Ready":"False" status (will retry)
	I1027 23:24:51.807192 1355720 node_ready.go:49] node "no-preload-947754" is "Ready"
	I1027 23:24:51.807221 1355720 node_ready.go:38] duration metric: took 13.503716834s for node "no-preload-947754" to be "Ready" ...
	I1027 23:24:51.807235 1355720 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:24:51.807298 1355720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:24:51.822552 1355720 api_server.go:72] duration metric: took 14.737576268s to wait for apiserver process to appear ...
	I1027 23:24:51.822582 1355720 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:24:51.822602 1355720 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 23:24:51.831354 1355720 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 23:24:51.832607 1355720 api_server.go:141] control plane version: v1.34.1
	I1027 23:24:51.832630 1355720 api_server.go:131] duration metric: took 10.041045ms to wait for apiserver health ...
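	[editor's note] The healthz wait above is a plain HTTPS poll against the apiserver until it answers 200 ("returned 200: ok"). A minimal sketch, assuming a self-signed apiserver certificate — hence the skipped verification, which is acceptable for a liveness probe but not for real traffic:

```go
// Hypothetical sketch: poll the apiserver /healthz endpoint until it
// returns HTTP 200 or the deadline passes, as the log above does.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // matches the log: "returned 200: ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```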
	I1027 23:24:51.832639 1355720 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:24:51.838485 1355720 system_pods.go:59] 8 kube-system pods found
	I1027 23:24:51.838580 1355720 system_pods.go:61] "coredns-66bc5c9577-mzm5d" [7af0a1a1-b33d-4152-ac15-91c2455b2d4c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:24:51.838605 1355720 system_pods.go:61] "etcd-no-preload-947754" [2be2c2d6-87dd-46e1-bc61-0b07f2a00a01] Running
	I1027 23:24:51.838639 1355720 system_pods.go:61] "kindnet-m7l4b" [baea7a6f-5608-4c48-bd36-abcd541e2d3b] Running
	I1027 23:24:51.838674 1355720 system_pods.go:61] "kube-apiserver-no-preload-947754" [19186a0e-373f-47f0-8e69-26a83b51bf39] Running
	I1027 23:24:51.838696 1355720 system_pods.go:61] "kube-controller-manager-no-preload-947754" [57f740fa-db37-4cbe-a187-a442c308ecc2] Running
	I1027 23:24:51.838725 1355720 system_pods.go:61] "kube-proxy-29878" [affca46b-bf6e-4821-a5e4-d7082cacdc04] Running
	I1027 23:24:51.838745 1355720 system_pods.go:61] "kube-scheduler-no-preload-947754" [62236697-12d4-40a2-b609-4cec58ee0277] Running
	I1027 23:24:51.838777 1355720 system_pods.go:61] "storage-provisioner" [7d8c57e3-c8ca-4466-9c32-fb227a39b7c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:24:51.838808 1355720 system_pods.go:74] duration metric: took 6.161876ms to wait for pod list to return data ...
	I1027 23:24:51.838835 1355720 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:24:51.844992 1355720 default_sa.go:45] found service account: "default"
	I1027 23:24:51.845015 1355720 default_sa.go:55] duration metric: took 6.160817ms for default service account to be created ...
	I1027 23:24:51.845025 1355720 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:24:51.850351 1355720 system_pods.go:86] 8 kube-system pods found
	I1027 23:24:51.850409 1355720 system_pods.go:89] "coredns-66bc5c9577-mzm5d" [7af0a1a1-b33d-4152-ac15-91c2455b2d4c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:24:51.850416 1355720 system_pods.go:89] "etcd-no-preload-947754" [2be2c2d6-87dd-46e1-bc61-0b07f2a00a01] Running
	I1027 23:24:51.850422 1355720 system_pods.go:89] "kindnet-m7l4b" [baea7a6f-5608-4c48-bd36-abcd541e2d3b] Running
	I1027 23:24:51.850427 1355720 system_pods.go:89] "kube-apiserver-no-preload-947754" [19186a0e-373f-47f0-8e69-26a83b51bf39] Running
	I1027 23:24:51.850435 1355720 system_pods.go:89] "kube-controller-manager-no-preload-947754" [57f740fa-db37-4cbe-a187-a442c308ecc2] Running
	I1027 23:24:51.850439 1355720 system_pods.go:89] "kube-proxy-29878" [affca46b-bf6e-4821-a5e4-d7082cacdc04] Running
	I1027 23:24:51.850443 1355720 system_pods.go:89] "kube-scheduler-no-preload-947754" [62236697-12d4-40a2-b609-4cec58ee0277] Running
	I1027 23:24:51.850449 1355720 system_pods.go:89] "storage-provisioner" [7d8c57e3-c8ca-4466-9c32-fb227a39b7c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:24:51.850470 1355720 retry.go:31] will retry after 197.101245ms: missing components: kube-dns
	I1027 23:24:52.052294 1355720 system_pods.go:86] 8 kube-system pods found
	I1027 23:24:52.052382 1355720 system_pods.go:89] "coredns-66bc5c9577-mzm5d" [7af0a1a1-b33d-4152-ac15-91c2455b2d4c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:24:52.052411 1355720 system_pods.go:89] "etcd-no-preload-947754" [2be2c2d6-87dd-46e1-bc61-0b07f2a00a01] Running
	I1027 23:24:52.052433 1355720 system_pods.go:89] "kindnet-m7l4b" [baea7a6f-5608-4c48-bd36-abcd541e2d3b] Running
	I1027 23:24:52.052462 1355720 system_pods.go:89] "kube-apiserver-no-preload-947754" [19186a0e-373f-47f0-8e69-26a83b51bf39] Running
	I1027 23:24:52.052496 1355720 system_pods.go:89] "kube-controller-manager-no-preload-947754" [57f740fa-db37-4cbe-a187-a442c308ecc2] Running
	I1027 23:24:52.052529 1355720 system_pods.go:89] "kube-proxy-29878" [affca46b-bf6e-4821-a5e4-d7082cacdc04] Running
	I1027 23:24:52.052548 1355720 system_pods.go:89] "kube-scheduler-no-preload-947754" [62236697-12d4-40a2-b609-4cec58ee0277] Running
	I1027 23:24:52.052568 1355720 system_pods.go:89] "storage-provisioner" [7d8c57e3-c8ca-4466-9c32-fb227a39b7c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:24:52.052619 1355720 retry.go:31] will retry after 379.464834ms: missing components: kube-dns
	I1027 23:24:52.436838 1355720 system_pods.go:86] 8 kube-system pods found
	I1027 23:24:52.436873 1355720 system_pods.go:89] "coredns-66bc5c9577-mzm5d" [7af0a1a1-b33d-4152-ac15-91c2455b2d4c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:24:52.436881 1355720 system_pods.go:89] "etcd-no-preload-947754" [2be2c2d6-87dd-46e1-bc61-0b07f2a00a01] Running
	I1027 23:24:52.436887 1355720 system_pods.go:89] "kindnet-m7l4b" [baea7a6f-5608-4c48-bd36-abcd541e2d3b] Running
	I1027 23:24:52.436891 1355720 system_pods.go:89] "kube-apiserver-no-preload-947754" [19186a0e-373f-47f0-8e69-26a83b51bf39] Running
	I1027 23:24:52.436895 1355720 system_pods.go:89] "kube-controller-manager-no-preload-947754" [57f740fa-db37-4cbe-a187-a442c308ecc2] Running
	I1027 23:24:52.436899 1355720 system_pods.go:89] "kube-proxy-29878" [affca46b-bf6e-4821-a5e4-d7082cacdc04] Running
	I1027 23:24:52.436907 1355720 system_pods.go:89] "kube-scheduler-no-preload-947754" [62236697-12d4-40a2-b609-4cec58ee0277] Running
	I1027 23:24:52.436911 1355720 system_pods.go:89] "storage-provisioner" [7d8c57e3-c8ca-4466-9c32-fb227a39b7c5] Running
	I1027 23:24:52.436919 1355720 system_pods.go:126] duration metric: took 591.88821ms to wait for k8s-apps to be running ...
	I1027 23:24:52.436927 1355720 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:24:52.436982 1355720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:24:52.461262 1355720 system_svc.go:56] duration metric: took 24.323971ms WaitForService to wait for kubelet
	I1027 23:24:52.461359 1355720 kubeadm.go:587] duration metric: took 15.376371831s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:24:52.461397 1355720 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:24:52.465404 1355720 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:24:52.465434 1355720 node_conditions.go:123] node cpu capacity is 2
	I1027 23:24:52.465445 1355720 node_conditions.go:105] duration metric: took 4.027593ms to run NodePressure ...
	I1027 23:24:52.465458 1355720 start.go:242] waiting for startup goroutines ...
	I1027 23:24:52.465465 1355720 start.go:247] waiting for cluster config update ...
	I1027 23:24:52.465476 1355720 start.go:256] writing updated cluster config ...
	I1027 23:24:52.465763 1355720 ssh_runner.go:195] Run: rm -f paused
	I1027 23:24:52.471293 1355720 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:24:52.488064 1355720 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mzm5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:53.494269 1355720 pod_ready.go:94] pod "coredns-66bc5c9577-mzm5d" is "Ready"
	I1027 23:24:53.494295 1355720 pod_ready.go:86] duration metric: took 1.006194819s for pod "coredns-66bc5c9577-mzm5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:53.497234 1355720 pod_ready.go:83] waiting for pod "etcd-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:53.503606 1355720 pod_ready.go:94] pod "etcd-no-preload-947754" is "Ready"
	I1027 23:24:53.503633 1355720 pod_ready.go:86] duration metric: took 6.368155ms for pod "etcd-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:53.506995 1355720 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:53.514313 1355720 pod_ready.go:94] pod "kube-apiserver-no-preload-947754" is "Ready"
	I1027 23:24:53.514343 1355720 pod_ready.go:86] duration metric: took 7.324353ms for pod "kube-apiserver-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:53.517536 1355720 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:53.691896 1355720 pod_ready.go:94] pod "kube-controller-manager-no-preload-947754" is "Ready"
	I1027 23:24:53.691925 1355720 pod_ready.go:86] duration metric: took 174.356447ms for pod "kube-controller-manager-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:53.893112 1355720 pod_ready.go:83] waiting for pod "kube-proxy-29878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:54.292015 1355720 pod_ready.go:94] pod "kube-proxy-29878" is "Ready"
	I1027 23:24:54.292043 1355720 pod_ready.go:86] duration metric: took 398.88697ms for pod "kube-proxy-29878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:54.491334 1355720 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:54.892039 1355720 pod_ready.go:94] pod "kube-scheduler-no-preload-947754" is "Ready"
	I1027 23:24:54.892073 1355720 pod_ready.go:86] duration metric: took 400.705235ms for pod "kube-scheduler-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:54.892086 1355720 pod_ready.go:40] duration metric: took 2.420709593s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:24:55.020810 1355720 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:24:55.026273 1355720 out.go:179] * Done! kubectl is now configured to use "no-preload-947754" cluster and "default" namespace by default
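	[editor's note] Both runs finish with the same extra pod_ready wait: every kube-system pod matching one of the control-plane labels must report the PodReady condition as True. A minimal client-go sketch of that check — the kubeconfig path and the shortened label list here are illustrative, not minikube's exact pod_ready.go:

```go
// Hypothetical sketch: list kube-system pods by label and report whether
// each carries PodReady=True, the condition the waits above poll for.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // example path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
		}
	}
}
```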
	
	
	==> CRI-O <==
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.552682784Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=72907c78-7598-44f5-8cb6-2f4c52dd3df6 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.554367246Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2ea2422e-85ae-4019-94e1-b3f4c907d017 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.555423294Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x/dashboard-metrics-scraper" id=f3250598-8ba6-4bd3-8f28-77dd7e5681e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.555559181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.563001969Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.563664576Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.581844188Z" level=info msg="Created container 09ab5a46773af9e2116c4944c8fbce13ecce96bc929057f176567b4da1e3a386: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x/dashboard-metrics-scraper" id=f3250598-8ba6-4bd3-8f28-77dd7e5681e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.583220364Z" level=info msg="Starting container: 09ab5a46773af9e2116c4944c8fbce13ecce96bc929057f176567b4da1e3a386" id=5b7606a7-696e-4d3a-92d9-4c288ec398f6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.586904428Z" level=info msg="Started container" PID=1665 containerID=09ab5a46773af9e2116c4944c8fbce13ecce96bc929057f176567b4da1e3a386 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x/dashboard-metrics-scraper id=5b7606a7-696e-4d3a-92d9-4c288ec398f6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=975020566fbb0232a926eaad8a9e870fa3d83321555aadc418e0e306c41d5cfd
	Oct 27 23:24:44 old-k8s-version-477179 conmon[1663]: conmon 09ab5a46773af9e2116c <ninfo>: container 1665 exited with status 1
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.941765342Z" level=info msg="Removing container: cd2d1065a5bf781083ef9f3266746e55788736e6bf5341d66216f56b3203be84" id=d19c2299-8130-4e02-8139-b0fca3f4e3de name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.951686097Z" level=info msg="Error loading conmon cgroup of container cd2d1065a5bf781083ef9f3266746e55788736e6bf5341d66216f56b3203be84: cgroup deleted" id=d19c2299-8130-4e02-8139-b0fca3f4e3de name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.95750643Z" level=info msg="Removed container cd2d1065a5bf781083ef9f3266746e55788736e6bf5341d66216f56b3203be84: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x/dashboard-metrics-scraper" id=d19c2299-8130-4e02-8139-b0fca3f4e3de name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.874672988Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.880015964Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.880049893Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.880073327Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.883264002Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.883301844Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.883325951Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.887586057Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.887619148Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.887644585Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.891185656Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.89121371Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	09ab5a46773af       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   2                   975020566fbb0       dashboard-metrics-scraper-5f989dc9cf-7248x       kubernetes-dashboard
	9cda4094bfed5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           18 seconds ago      Running             storage-provisioner         2                   15283197f0e51       storage-provisioner                              kube-system
	76f54d3dbd7fd       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   21 seconds ago      Running             kubernetes-dashboard        0                   04a5eb8aafba2       kubernetes-dashboard-8694d4445c-hnmb4            kubernetes-dashboard
	266c1e8038479       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           48 seconds ago      Running             kube-proxy                  1                   be3041f022c27       kube-proxy-t6hvl                                 kube-system
	08a2078427d64       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           48 seconds ago      Running             busybox                     1                   77d3da93270f4       busybox                                          default
	8dd45d72c4796       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           48 seconds ago      Running             coredns                     1                   1568be3a37133       coredns-5dd5756b68-zmrh9                         kube-system
	f6678a4bfdea0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           48 seconds ago      Running             kindnet-cni                 1                   d37a5b86521a8       kindnet-z26d6                                    kube-system
	2aab2984cba3a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           48 seconds ago      Exited              storage-provisioner         1                   15283197f0e51       storage-provisioner                              kube-system
	31d2036be45f7       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           57 seconds ago      Running             kube-apiserver              1                   a5cd0a5f75890       kube-apiserver-old-k8s-version-477179            kube-system
	4cc4ea0f92239       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           58 seconds ago      Running             etcd                        1                   8c87e0807307c       etcd-old-k8s-version-477179                      kube-system
	4df94ad74d55d       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           58 seconds ago      Running             kube-controller-manager     1                   fec393af28f76       kube-controller-manager-old-k8s-version-477179   kube-system
	0daf78b0c28b9       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           58 seconds ago      Running             kube-scheduler              1                   a2201c254522e       kube-scheduler-old-k8s-version-477179            kube-system
	
	
	==> coredns [8dd45d72c479651ba09d2be7f8a62f2c5eb7ccd81bf397242248fd631ff5c1e2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34565 - 4094 "HINFO IN 7565624524836270135.3906192045454744344. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016630036s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-477179
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-477179
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=old-k8s-version-477179
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T23_22_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 23:22:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-477179
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 23:24:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 23:24:27 +0000   Mon, 27 Oct 2025 23:22:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 23:24:27 +0000   Mon, 27 Oct 2025 23:22:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 23:24:27 +0000   Mon, 27 Oct 2025 23:22:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 23:24:27 +0000   Mon, 27 Oct 2025 23:23:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-477179
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                c71561b3-c618-4514-9439-9c8988ccb8a0
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-zmrh9                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-old-k8s-version-477179                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-z26d6                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-477179             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-477179    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-t6hvl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-477179             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-7248x        0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-hnmb4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 110s                   kube-proxy       
	  Normal  Starting                 47s                    kube-proxy       
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-477179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-477179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-477179 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m4s                   kubelet          Node old-k8s-version-477179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m4s                   kubelet          Node old-k8s-version-477179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m4s                   kubelet          Node old-k8s-version-477179 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                   node-controller  Node old-k8s-version-477179 event: Registered Node old-k8s-version-477179 in Controller
	  Normal  NodeReady                97s                    kubelet          Node old-k8s-version-477179 status is now: NodeReady
	  Normal  Starting                 59s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)      kubelet          Node old-k8s-version-477179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)      kubelet          Node old-k8s-version-477179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)      kubelet          Node old-k8s-version-477179 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                    node-controller  Node old-k8s-version-477179 event: Registered Node old-k8s-version-477179 in Controller
	
	
	==> dmesg <==
	[  +1.719322] overlayfs: idmapped layers are currently not supported
	[Oct27 23:00] overlayfs: idmapped layers are currently not supported
	[Oct27 23:01] overlayfs: idmapped layers are currently not supported
	[ +42.515610] overlayfs: idmapped layers are currently not supported
	[Oct27 23:02] overlayfs: idmapped layers are currently not supported
	[Oct27 23:03] overlayfs: idmapped layers are currently not supported
	[Oct27 23:04] overlayfs: idmapped layers are currently not supported
	[Oct27 23:06] overlayfs: idmapped layers are currently not supported
	[  +3.129054] overlayfs: idmapped layers are currently not supported
	[Oct27 23:08] overlayfs: idmapped layers are currently not supported
	[Oct27 23:09] overlayfs: idmapped layers are currently not supported
	[  +0.696324] overlayfs: idmapped layers are currently not supported
	[ +42.065460] overlayfs: idmapped layers are currently not supported
	[Oct27 23:10] overlayfs: idmapped layers are currently not supported
	[ +23.722860] overlayfs: idmapped layers are currently not supported
	[Oct27 23:16] overlayfs: idmapped layers are currently not supported
	[Oct27 23:17] overlayfs: idmapped layers are currently not supported
	[Oct27 23:18] overlayfs: idmapped layers are currently not supported
	[Oct27 23:19] overlayfs: idmapped layers are currently not supported
	[Oct27 23:20] overlayfs: idmapped layers are currently not supported
	[Oct27 23:21] overlayfs: idmapped layers are currently not supported
	[Oct27 23:22] overlayfs: idmapped layers are currently not supported
	[ +34.590925] overlayfs: idmapped layers are currently not supported
	[Oct27 23:23] overlayfs: idmapped layers are currently not supported
	[  +6.906011] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4cc4ea0f92239fc9155b151efab480bb22dbf8b3551f7c315daae1493853f27f] <==
	{"level":"info","ts":"2025-10-27T23:23:59.034819Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-27T23:23:59.034864Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-27T23:23:59.035176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-27T23:23:59.035291Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-27T23:23:59.035448Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T23:23:59.035505Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T23:23:59.066305Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-27T23:23:59.082968Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-27T23:23:59.112141Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-27T23:23:59.081669Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-27T23:23:59.112247Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-27T23:23:59.926254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-27T23:23:59.926358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-27T23:23:59.926722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-27T23:23:59.926787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-27T23:23:59.926821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-27T23:23:59.926871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-27T23:23:59.926919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-27T23:23:59.93328Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-477179 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-27T23:23:59.933452Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T23:23:59.934512Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-27T23:23:59.934558Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T23:23:59.935359Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-27T23:23:59.960799Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-27T23:23:59.960891Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 23:24:56 up  6:07,  0 user,  load average: 3.79, 3.66, 3.14
	Linux old-k8s-version-477179 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f6678a4bfdea01a536baa38f2f64d3a12a42d128714d4a3edd59407299000596] <==
	I1027 23:24:07.640847       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:24:07.651060       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 23:24:07.651410       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:24:07.651425       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:24:07.651557       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:24:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:24:07.915836       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:24:07.915954       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:24:07.917090       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:24:07.917294       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 23:24:37.916073       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 23:24:37.927193       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 23:24:37.927298       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 23:24:37.927420       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1027 23:24:39.418045       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 23:24:39.418141       1 metrics.go:72] Registering metrics
	I1027 23:24:39.418262       1 controller.go:711] "Syncing nftables rules"
	I1027 23:24:47.874327       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 23:24:47.874398       1 main.go:301] handling current node
	
	
	==> kube-apiserver [31d2036be45f7a86c828442bcf45019e9bddf4f8b4f0001aa49eaad623860144] <==
	I1027 23:24:06.676875       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1027 23:24:06.678134       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1027 23:24:06.681833       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1027 23:24:06.681859       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1027 23:24:06.683931       1 aggregator.go:166] initial CRD sync complete...
	I1027 23:24:06.683966       1 autoregister_controller.go:141] Starting autoregister controller
	I1027 23:24:06.683973       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 23:24:06.683985       1 cache.go:39] Caches are synced for autoregister controller
	I1027 23:24:06.930156       1 trace.go:236] Trace[380303693]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:811577d7-4c42-451b-a3d9-a1a89005eef5,client:192.168.85.2,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (27-Oct-2025 23:24:06.405) (total time: 524ms):
	Trace[380303693]: ---"Write to database call failed" len:4139,err:nodes "old-k8s-version-477179" already exists 94ms (23:24:06.930)
	Trace[380303693]: [524.229405ms] [524.229405ms] END
	I1027 23:24:06.943140       1 trace.go:236] Trace[1142693778]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ff83b1b8-36b6-4c16-89e0-68d941e611a3,client:192.168.85.2,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (27-Oct-2025 23:24:06.380) (total time: 562ms):
	Trace[1142693778]: [562.927011ms] [562.927011ms] END
	I1027 23:24:07.030421       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1027 23:24:07.045458       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 23:24:10.758608       1 controller.go:624] quota admission added evaluator for: namespaces
	I1027 23:24:10.828737       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1027 23:24:10.857429       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 23:24:10.872044       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 23:24:10.882978       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1027 23:24:10.942398       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.73.173"}
	I1027 23:24:10.966524       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.35.79"}
	I1027 23:24:20.389660       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 23:24:20.426834       1 controller.go:624] quota admission added evaluator for: endpoints
	I1027 23:24:20.431618       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4df94ad74d55d5841a5ebd671ae3a091cbc30efa3d08697d8baed42fd415cbf1] <==
	I1027 23:24:20.479000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.765µs"
	I1027 23:24:20.494340       1 shared_informer.go:318] Caches are synced for resource quota
	I1027 23:24:20.510491       1 shared_informer.go:318] Caches are synced for stateful set
	I1027 23:24:20.515158       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-7248x"
	I1027 23:24:20.532906       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-hnmb4"
	I1027 23:24:20.558442       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.66317ms"
	I1027 23:24:20.575571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="112.71041ms"
	I1027 23:24:20.588710       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="30.139708ms"
	I1027 23:24:20.588854       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="42.824µs"
	I1027 23:24:20.646333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="69.58189ms"
	I1027 23:24:20.649238       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="72.108µs"
	I1027 23:24:20.649438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.254µs"
	I1027 23:24:20.684949       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="85.228µs"
	I1027 23:24:20.835517       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 23:24:20.910941       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 23:24:20.910971       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1027 23:24:28.909720       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.109µs"
	I1027 23:24:29.919838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.095µs"
	I1027 23:24:30.958528       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="139.998µs"
	I1027 23:24:34.968024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="34.419358ms"
	I1027 23:24:34.968201       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="97.996µs"
	I1027 23:24:39.147489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.994244ms"
	I1027 23:24:39.147823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="137.331µs"
	I1027 23:24:44.960722       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.705µs"
	I1027 23:24:50.867545       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.949µs"
	
	
	==> kube-proxy [266c1e8038479147b3192edbb4966e537d86784dad76d9a4aa532c21689fc44c] <==
	I1027 23:24:08.457178       1 server_others.go:69] "Using iptables proxy"
	I1027 23:24:08.540892       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1027 23:24:08.603532       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:24:08.605450       1 server_others.go:152] "Using iptables Proxier"
	I1027 23:24:08.605537       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1027 23:24:08.605570       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1027 23:24:08.605627       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1027 23:24:08.605864       1 server.go:846] "Version info" version="v1.28.0"
	I1027 23:24:08.606220       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:24:08.622222       1 config.go:188] "Starting service config controller"
	I1027 23:24:08.622320       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1027 23:24:08.622364       1 config.go:97] "Starting endpoint slice config controller"
	I1027 23:24:08.622460       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1027 23:24:08.626073       1 config.go:315] "Starting node config controller"
	I1027 23:24:08.626155       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1027 23:24:08.723363       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1027 23:24:08.723410       1 shared_informer.go:318] Caches are synced for service config
	I1027 23:24:08.728777       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [0daf78b0c28b92f6f69bc82b09d8267753a05593afe602cb3abe6fd2fe226dd4] <==
	I1027 23:24:01.093399       1 serving.go:348] Generated self-signed cert in-memory
	W1027 23:24:06.417479       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 23:24:06.417586       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 23:24:06.417620       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 23:24:06.417686       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 23:24:06.636641       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1027 23:24:06.638205       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:24:06.640409       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1027 23:24:06.640576       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:24:06.640620       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1027 23:24:06.640667       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1027 23:24:06.744564       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 27 23:24:09 old-k8s-version-477179 kubelet[775]: I1027 23:24:09.097430     775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 23:24:20 old-k8s-version-477179 kubelet[775]: I1027 23:24:20.546519     775 topology_manager.go:215] "Topology Admit Handler" podUID="d7eada63-c5a5-4c7b-85da-87f01144acad" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-7248x"
	Oct 27 23:24:20 old-k8s-version-477179 kubelet[775]: I1027 23:24:20.579037     775 topology_manager.go:215] "Topology Admit Handler" podUID="9af278b5-b4c3-4acf-a098-ffd7b10c75e5" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-hnmb4"
	Oct 27 23:24:20 old-k8s-version-477179 kubelet[775]: I1027 23:24:20.722740     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwhm8\" (UniqueName: \"kubernetes.io/projected/d7eada63-c5a5-4c7b-85da-87f01144acad-kube-api-access-wwhm8\") pod \"dashboard-metrics-scraper-5f989dc9cf-7248x\" (UID: \"d7eada63-c5a5-4c7b-85da-87f01144acad\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x"
	Oct 27 23:24:20 old-k8s-version-477179 kubelet[775]: I1027 23:24:20.722975     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-777qz\" (UniqueName: \"kubernetes.io/projected/9af278b5-b4c3-4acf-a098-ffd7b10c75e5-kube-api-access-777qz\") pod \"kubernetes-dashboard-8694d4445c-hnmb4\" (UID: \"9af278b5-b4c3-4acf-a098-ffd7b10c75e5\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-hnmb4"
	Oct 27 23:24:20 old-k8s-version-477179 kubelet[775]: I1027 23:24:20.723088     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9af278b5-b4c3-4acf-a098-ffd7b10c75e5-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-hnmb4\" (UID: \"9af278b5-b4c3-4acf-a098-ffd7b10c75e5\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-hnmb4"
	Oct 27 23:24:20 old-k8s-version-477179 kubelet[775]: I1027 23:24:20.723200     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d7eada63-c5a5-4c7b-85da-87f01144acad-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-7248x\" (UID: \"d7eada63-c5a5-4c7b-85da-87f01144acad\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x"
	Oct 27 23:24:20 old-k8s-version-477179 kubelet[775]: W1027 23:24:20.878676     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373/crio-975020566fbb0232a926eaad8a9e870fa3d83321555aadc418e0e306c41d5cfd WatchSource:0}: Error finding container 975020566fbb0232a926eaad8a9e870fa3d83321555aadc418e0e306c41d5cfd: Status 404 returned error can't find the container with id 975020566fbb0232a926eaad8a9e870fa3d83321555aadc418e0e306c41d5cfd
	Oct 27 23:24:28 old-k8s-version-477179 kubelet[775]: I1027 23:24:28.885646     775 scope.go:117] "RemoveContainer" containerID="c7c0eda28b5e0bd516731e19c372b7cbbefc18494146c5179c2fd902e0c632bf"
	Oct 27 23:24:29 old-k8s-version-477179 kubelet[775]: I1027 23:24:29.893089     775 scope.go:117] "RemoveContainer" containerID="c7c0eda28b5e0bd516731e19c372b7cbbefc18494146c5179c2fd902e0c632bf"
	Oct 27 23:24:29 old-k8s-version-477179 kubelet[775]: I1027 23:24:29.893439     775 scope.go:117] "RemoveContainer" containerID="cd2d1065a5bf781083ef9f3266746e55788736e6bf5341d66216f56b3203be84"
	Oct 27 23:24:29 old-k8s-version-477179 kubelet[775]: E1027 23:24:29.893789     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7248x_kubernetes-dashboard(d7eada63-c5a5-4c7b-85da-87f01144acad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x" podUID="d7eada63-c5a5-4c7b-85da-87f01144acad"
	Oct 27 23:24:30 old-k8s-version-477179 kubelet[775]: I1027 23:24:30.896888     775 scope.go:117] "RemoveContainer" containerID="cd2d1065a5bf781083ef9f3266746e55788736e6bf5341d66216f56b3203be84"
	Oct 27 23:24:30 old-k8s-version-477179 kubelet[775]: E1027 23:24:30.897340     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7248x_kubernetes-dashboard(d7eada63-c5a5-4c7b-85da-87f01144acad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x" podUID="d7eada63-c5a5-4c7b-85da-87f01144acad"
	Oct 27 23:24:37 old-k8s-version-477179 kubelet[775]: I1027 23:24:37.920583     775 scope.go:117] "RemoveContainer" containerID="2aab2984cba3a6ac659a5293f3fc709521e8bf4e3e62a456804c373f3774d3f5"
	Oct 27 23:24:37 old-k8s-version-477179 kubelet[775]: I1027 23:24:37.977637     775 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-hnmb4" podStartSLOduration=4.230417368 podCreationTimestamp="2025-10-27 23:24:20 +0000 UTC" firstStartedPulling="2025-10-27 23:24:20.931343835 +0000 UTC m=+23.665230629" lastFinishedPulling="2025-10-27 23:24:34.678511143 +0000 UTC m=+37.412397945" observedRunningTime="2025-10-27 23:24:34.933157481 +0000 UTC m=+37.667044283" watchObservedRunningTime="2025-10-27 23:24:37.977584684 +0000 UTC m=+40.711471478"
	Oct 27 23:24:44 old-k8s-version-477179 kubelet[775]: I1027 23:24:44.551984     775 scope.go:117] "RemoveContainer" containerID="cd2d1065a5bf781083ef9f3266746e55788736e6bf5341d66216f56b3203be84"
	Oct 27 23:24:44 old-k8s-version-477179 kubelet[775]: I1027 23:24:44.940387     775 scope.go:117] "RemoveContainer" containerID="cd2d1065a5bf781083ef9f3266746e55788736e6bf5341d66216f56b3203be84"
	Oct 27 23:24:44 old-k8s-version-477179 kubelet[775]: I1027 23:24:44.940623     775 scope.go:117] "RemoveContainer" containerID="09ab5a46773af9e2116c4944c8fbce13ecce96bc929057f176567b4da1e3a386"
	Oct 27 23:24:44 old-k8s-version-477179 kubelet[775]: E1027 23:24:44.940949     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7248x_kubernetes-dashboard(d7eada63-c5a5-4c7b-85da-87f01144acad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x" podUID="d7eada63-c5a5-4c7b-85da-87f01144acad"
	Oct 27 23:24:50 old-k8s-version-477179 kubelet[775]: I1027 23:24:50.850042     775 scope.go:117] "RemoveContainer" containerID="09ab5a46773af9e2116c4944c8fbce13ecce96bc929057f176567b4da1e3a386"
	Oct 27 23:24:50 old-k8s-version-477179 kubelet[775]: E1027 23:24:50.850945     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7248x_kubernetes-dashboard(d7eada63-c5a5-4c7b-85da-87f01144acad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x" podUID="d7eada63-c5a5-4c7b-85da-87f01144acad"
	Oct 27 23:24:53 old-k8s-version-477179 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 23:24:53 old-k8s-version-477179 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 23:24:53 old-k8s-version-477179 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [76f54d3dbd7fd7c913b3758a5fcab315050789c5914aa4cdea07154989d5e5c1] <==
	2025/10/27 23:24:34 Using namespace: kubernetes-dashboard
	2025/10/27 23:24:34 Using in-cluster config to connect to apiserver
	2025/10/27 23:24:34 Using secret token for csrf signing
	2025/10/27 23:24:34 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 23:24:34 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 23:24:34 Successful initial request to the apiserver, version: v1.28.0
	2025/10/27 23:24:34 Generating JWE encryption key
	2025/10/27 23:24:34 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 23:24:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 23:24:35 Initializing JWE encryption key from synchronized object
	2025/10/27 23:24:35 Creating in-cluster Sidecar client
	2025/10/27 23:24:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 23:24:35 Serving insecurely on HTTP port: 9090
	2025/10/27 23:24:34 Starting overwatch
	
	
	==> storage-provisioner [2aab2984cba3a6ac659a5293f3fc709521e8bf4e3e62a456804c373f3774d3f5] <==
	I1027 23:24:07.610250       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 23:24:37.613524       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9cda4094bfed5a639c35f0a169fc39a8317d45025263f0528ba134c879485b25] <==
	I1027 23:24:38.044406       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 23:24:38.079720       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 23:24:38.082595       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1027 23:24:55.490624       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 23:24:55.491033       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"60ebd7b9-9b45-4373-8eb9-0ab942bf1b51", APIVersion:"v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-477179_3b29ac17-0d70-46cf-8990-79be41ea6022 became leader
	I1027 23:24:55.492599       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-477179_3b29ac17-0d70-46cf-8990-79be41ea6022!
	I1027 23:24:55.594073       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-477179_3b29ac17-0d70-46cf-8990-79be41ea6022!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-477179 -n old-k8s-version-477179
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-477179 -n old-k8s-version-477179: exit status 2 (369.734426ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-477179 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-477179
helpers_test.go:243: (dbg) docker inspect old-k8s-version-477179:

-- stdout --
	[
	    {
	        "Id": "431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373",
	        "Created": "2025-10-27T23:22:26.560712085Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1357468,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T23:23:49.686109518Z",
	            "FinishedAt": "2025-10-27T23:23:48.691324951Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373/hostname",
	        "HostsPath": "/var/lib/docker/containers/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373/hosts",
	        "LogPath": "/var/lib/docker/containers/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373-json.log",
	        "Name": "/old-k8s-version-477179",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-477179:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-477179",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373",
	                "LowerDir": "/var/lib/docker/overlay2/d8f908fffe7b993d60442f64b7c5515882a75e6389218c999c1c83e3311e169e-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8f908fffe7b993d60442f64b7c5515882a75e6389218c999c1c83e3311e169e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8f908fffe7b993d60442f64b7c5515882a75e6389218c999c1c83e3311e169e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8f908fffe7b993d60442f64b7c5515882a75e6389218c999c1c83e3311e169e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-477179",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-477179/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-477179",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-477179",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-477179",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63de14d2944c7bce7a5ea4094457e376b4b063c2f7f06143ff37bd59f1016daa",
	            "SandboxKey": "/var/run/docker/netns/63de14d2944c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34569"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34570"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34573"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34571"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34572"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-477179": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:fd:3d:de:0d:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "70c91a2d56ea508083256c63182c2c3e1ef772ce7bb88e6562d5b5aa2b7beeaf",
	                    "EndpointID": "1418147bf4af69a6ecf9086788999d087e7479730e07462d7aafdeab78ca7332",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-477179",
	                        "431f1160e1d3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
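
The inspect dump above is where the harness gets its connection details: the Ports map shows which loopback ports Docker mapped to the container's SSH (22/tcp) and API-server (8443/tcp) ports. A minimal Go sketch of that lookup, using the same inspect template the cli_runner lines later in this log invoke (the helper name is illustrative, not minikube's):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort shells out to docker with the same Go template seen in the
	// cli_runner log lines; for this container the Ports section above maps
	// 22/tcp to 127.0.0.1:34569.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, _ := sshHostPort("old-k8s-version-477179")
		fmt.Println(port) // expected: 34569
	}
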
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-477179 -n old-k8s-version-477179
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-477179 -n old-k8s-version-477179: exit status 2 (444.965617ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
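
Note the combination above: --format={{.Host}} reports the host container as Running, yet the command exits 2, which the harness tolerates ("may be ok") because this test has just paused the cluster, so non-host components are intentionally down. A hedged sketch of interpreting that pair (the exit-code reading is inferred from this run, not from documented semantics):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-477179")
		out, err := cmd.Output() // stdout is still captured on a non-zero exit
		host := strings.TrimSpace(string(out))
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// e.g. host=Running, exit=2 right after `minikube pause`
			fmt.Printf("host=%s exit=%d (components stopped or paused)\n", host, ee.ExitCode())
			return
		}
		fmt.Printf("host=%s, cluster fully up\n", host)
	}
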
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-477179 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-477179 logs -n 25: (1.321452197s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p bridge-440075 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo docker system info                                                                                                                                                                                                      │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-477179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo containerd config dump                                                                                                                                                                                                  │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ stop    │ -p old-k8s-version-477179 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo crio config                                                                                                                                                                                                             │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ delete  │ -p bridge-440075                                                                                                                                                                                                                              │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ start   │ -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-947754      │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:24 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-477179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ start   │ -p old-k8s-version-477179 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:24 UTC │
	│ image   │ old-k8s-version-477179 image list --format=json                                                                                                                                                                                               │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │ 27 Oct 25 23:24 UTC │
	│ pause   │ -p old-k8s-version-477179 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 23:23:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
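
Every line that follows uses that klog header layout, so severity, date, timestamp, PID, and source location can be split off mechanically. A small sketch of that parse, with the regexp written directly against the format string above:

	package main

	import (
		"fmt"
		"regexp"
	)

	// Matches: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogHeader = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

	func main() {
		m := klogHeader.FindStringSubmatch(
			"I1027 23:23:49.343502 1357280 out.go:360] Setting OutFile to fd 1 ...")
		// m[1]=severity, m[2]=mmdd, m[3]=time, m[4]=pid, m[5]=file:line, m[6]=msg
		fmt.Println(m[1], m[2], m[3], m[4], m[5])
	}
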
	I1027 23:23:49.343502 1357280 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:23:49.343614 1357280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:23:49.343620 1357280 out.go:374] Setting ErrFile to fd 2...
	I1027 23:23:49.343624 1357280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:23:49.343865 1357280 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:23:49.344241 1357280 out.go:368] Setting JSON to false
	I1027 23:23:49.345088 1357280 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":21979,"bootTime":1761585451,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 23:23:49.345164 1357280 start.go:143] virtualization:  
	I1027 23:23:49.348574 1357280 out.go:179] * [old-k8s-version-477179] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 23:23:49.352436 1357280 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:23:49.352563 1357280 notify.go:221] Checking for updates...
	I1027 23:23:49.358460 1357280 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:23:49.361369 1357280 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:23:49.364172 1357280 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 23:23:49.366918 1357280 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 23:23:49.369750 1357280 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 23:23:49.373172 1357280 config.go:182] Loaded profile config "old-k8s-version-477179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 23:23:49.376526 1357280 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1027 23:23:49.379402 1357280 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:23:49.419764 1357280 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 23:23:49.419864 1357280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:23:49.497624 1357280 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-27 23:23:49.488329915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:23:49.497736 1357280 docker.go:318] overlay module found
	I1027 23:23:49.501507 1357280 out.go:179] * Using the docker driver based on existing profile
	I1027 23:23:46.473847 1355720 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-947754
	
	I1027 23:23:46.473879 1355720 ubuntu.go:182] provisioning hostname "no-preload-947754"
	I1027 23:23:46.473947 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:23:46.490222 1355720 main.go:143] libmachine: Using SSH client type: native
	I1027 23:23:46.490573 1355720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34564 <nil> <nil>}
	I1027 23:23:46.490593 1355720 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-947754 && echo "no-preload-947754" | sudo tee /etc/hostname
	I1027 23:23:46.647386 1355720 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-947754
	
	I1027 23:23:46.647521 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:23:46.664449 1355720 main.go:143] libmachine: Using SSH client type: native
	I1027 23:23:46.664752 1355720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34564 <nil> <nil>}
	I1027 23:23:46.664774 1355720 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-947754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-947754/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-947754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:23:46.814627 1355720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
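
The shell block above keeps /etc/hosts consistent with the freshly set hostname: if no line already ends in the hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise appends one. For example (the "before" value is hypothetical; this run produced no output because the silent sed branch ran):

	before: 127.0.1.1 old-hostname          (hypothetical prior entry)
	after:  127.0.1.1 no-preload-947754
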
	I1027 23:23:46.814658 1355720 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:23:46.814687 1355720 ubuntu.go:190] setting up certificates
	I1027 23:23:46.814697 1355720 provision.go:84] configureAuth start
	I1027 23:23:46.814758 1355720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-947754
	I1027 23:23:46.831711 1355720 provision.go:143] copyHostCerts
	I1027 23:23:46.831779 1355720 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:23:46.831794 1355720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:23:46.831876 1355720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:23:46.831970 1355720 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:23:46.831979 1355720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:23:46.832004 1355720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:23:46.832087 1355720 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:23:46.832098 1355720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:23:46.832122 1355720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:23:46.832181 1355720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.no-preload-947754 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-947754]
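
configureAuth generates a docker-machine-style server certificate whose SANs cover loopback, the node IP, and the usual hostnames, signed by the CA under .minikube/certs. A self-contained sketch of an equivalent certificate (self-signed here for brevity, where minikube signs with its CA; the expiry matches the profile's CertExpiration of 26280h):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-947754"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs match the san=[...] list in the provision log above.
			DNSNames:    []string{"localhost", "minikube", "no-preload-947754"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
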
	I1027 23:23:47.157243 1355720 provision.go:177] copyRemoteCerts
	I1027 23:23:47.157313 1355720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:23:47.157369 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:23:47.176200 1355720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34564 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:23:47.282333 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:23:47.299962 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 23:23:47.317742 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 23:23:47.335119 1355720 provision.go:87] duration metric: took 520.399235ms to configureAuth
	I1027 23:23:47.335152 1355720 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:23:47.335350 1355720 config.go:182] Loaded profile config "no-preload-947754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:23:47.335459 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:23:47.356760 1355720 main.go:143] libmachine: Using SSH client type: native
	I1027 23:23:47.357076 1355720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34564 <nil> <nil>}
	I1027 23:23:47.357092 1355720 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:23:47.615571 1355720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
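The printf/tee pair above drops a one-line environment file that cri-o picks up on the restart that follows, marking the cluster's ServiceCIDR (10.96.0.0/12, per the cluster config later in this log) as an insecure registry range so in-cluster registries work without TLS. The resulting file:

	# /etc/sysconfig/crio.minikube
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
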
	I1027 23:23:47.615637 1355720 machine.go:97] duration metric: took 4.308645977s to provisionDockerMachine
	I1027 23:23:47.615666 1355720 client.go:176] duration metric: took 6.600769648s to LocalClient.Create
	I1027 23:23:47.615703 1355720 start.go:167] duration metric: took 6.60085929s to libmachine.API.Create "no-preload-947754"
	I1027 23:23:47.615723 1355720 start.go:293] postStartSetup for "no-preload-947754" (driver="docker")
	I1027 23:23:47.615775 1355720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:23:47.615857 1355720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:23:47.615936 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:23:47.634115 1355720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34564 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:23:47.738627 1355720 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:23:47.741837 1355720 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:23:47.741884 1355720 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:23:47.741896 1355720 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:23:47.741954 1355720 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:23:47.742059 1355720 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:23:47.742166 1355720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:23:47.749574 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:23:47.766533 1355720 start.go:296] duration metric: took 150.780907ms for postStartSetup
	I1027 23:23:47.766886 1355720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-947754
	I1027 23:23:47.783410 1355720 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/config.json ...
	I1027 23:23:47.783688 1355720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:23:47.783739 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:23:47.799803 1355720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34564 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:23:47.899210 1355720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:23:47.903614 1355720 start.go:128] duration metric: took 6.894564937s to createHost
	I1027 23:23:47.903674 1355720 start.go:83] releasing machines lock for "no-preload-947754", held for 6.894725357s
	I1027 23:23:47.903762 1355720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-947754
	I1027 23:23:47.920221 1355720 ssh_runner.go:195] Run: cat /version.json
	I1027 23:23:47.920274 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:23:47.920511 1355720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:23:47.920565 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:23:47.942091 1355720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34564 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:23:47.952162 1355720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34564 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:23:48.046462 1355720 ssh_runner.go:195] Run: systemctl --version
	I1027 23:23:48.144559 1355720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:23:48.178095 1355720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:23:48.182509 1355720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:23:48.182605 1355720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:23:48.211189 1355720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
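
Disabling is done by rename rather than deletion, so the stock bridge/podman CNI configs cannot shadow the kindnet config minikube installs for the docker+crio combination (see the "recommending kindnet" line below); e.g. /etc/cni/net.d/87-podman-bridge.conflist becomes 87-podman-bridge.conflist.mk_disabled and can be restored later.
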
	I1027 23:23:48.211227 1355720 start.go:496] detecting cgroup driver to use...
	I1027 23:23:48.211259 1355720 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 23:23:48.211320 1355720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:23:48.227946 1355720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:23:48.240798 1355720 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:23:48.240863 1355720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:23:48.258350 1355720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:23:48.276829 1355720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:23:48.395460 1355720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:23:48.521808 1355720 docker.go:234] disabling docker service ...
	I1027 23:23:48.521899 1355720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:23:48.545358 1355720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:23:48.559265 1355720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:23:48.699059 1355720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:23:48.887597 1355720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:23:48.913808 1355720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:23:48.928856 1355720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:23:48.928936 1355720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:48.945902 1355720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:23:48.945986 1355720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:48.956109 1355720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:48.966845 1355720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:48.983374 1355720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:23:49.027241 1355720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:49.038627 1355720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:49.058231 1355720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:49.069839 1355720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:23:49.077946 1355720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:23:49.085725 1355720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:23:49.236177 1355720 ssh_runner.go:195] Run: sudo systemctl restart crio
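
After that sed pass, the drop-in /etc/crio/crio.conf.d/02-crio.conf contains roughly the following (reconstructed from the edits above; surrounding TOML section headers elided):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
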
	I1027 23:23:49.394971 1355720 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:23:49.395052 1355720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:23:49.403157 1355720 start.go:564] Will wait 60s for crictl version
	I1027 23:23:49.403227 1355720 ssh_runner.go:195] Run: which crictl
	I1027 23:23:49.410205 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:23:49.461289 1355720 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 23:23:49.461382 1355720 ssh_runner.go:195] Run: crio --version
	I1027 23:23:49.500021 1355720 ssh_runner.go:195] Run: crio --version
	I1027 23:23:49.557119 1355720 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 23:23:49.504387 1357280 start.go:307] selected driver: docker
	I1027 23:23:49.504412 1357280 start.go:928] validating driver "docker" against &{Name:old-k8s-version-477179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-477179 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:23:49.504531 1357280 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:23:49.505236 1357280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:23:49.587772 1357280 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-27 23:23:49.578143773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:23:49.588125 1357280 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:23:49.588159 1357280 cni.go:84] Creating CNI manager for ""
	I1027 23:23:49.588211 1357280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:23:49.588256 1357280 start.go:351] cluster config:
	{Name:old-k8s-version-477179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-477179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:23:49.591802 1357280 out.go:179] * Starting "old-k8s-version-477179" primary control-plane node in "old-k8s-version-477179" cluster
	I1027 23:23:49.594749 1357280 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 23:23:49.597783 1357280 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 23:23:49.600633 1357280 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 23:23:49.600687 1357280 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1027 23:23:49.600713 1357280 cache.go:59] Caching tarball of preloaded images
	I1027 23:23:49.600792 1357280 preload.go:233] Found /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 23:23:49.600800 1357280 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1027 23:23:49.600906 1357280 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/config.json ...
	I1027 23:23:49.601116 1357280 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 23:23:49.624743 1357280 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 23:23:49.624773 1357280 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 23:23:49.624787 1357280 cache.go:233] Successfully downloaded all kic artifacts
	I1027 23:23:49.624815 1357280 start.go:360] acquireMachinesLock for old-k8s-version-477179: {Name:mka53febc0a54f4faa3bdae2e66b439a96a1b896 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:23:49.624891 1357280 start.go:364] duration metric: took 33.223µs to acquireMachinesLock for "old-k8s-version-477179"
	I1027 23:23:49.624914 1357280 start.go:96] Skipping create...Using existing machine configuration
	I1027 23:23:49.624919 1357280 fix.go:55] fixHost starting: 
	I1027 23:23:49.625178 1357280 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:23:49.650118 1357280 fix.go:113] recreateIfNeeded on old-k8s-version-477179: state=Stopped err=<nil>
	W1027 23:23:49.650150 1357280 fix.go:139] unexpected machine state, will restart: <nil>
	I1027 23:23:49.560033 1355720 cli_runner.go:164] Run: docker network inspect no-preload-947754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
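
That --format string is a Go template that renders the network inspect into a single JSON object. With this run's values it would yield something like the following (Driver, Subnet, and MTU are inferred, and note the template's trailing comma inside ContainerIPs, which makes the output only JSON-ish):

	{"Name": "no-preload-947754","Driver": "bridge","Subnet": "192.168.76.0/24","Gateway": "192.168.76.1","MTU": 1500, "ContainerIPs": ["192.168.76.2/24",]}
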
	I1027 23:23:49.592045 1355720 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 23:23:49.596061 1355720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:23:49.607312 1355720 kubeadm.go:884] updating cluster {Name:no-preload-947754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-947754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:23:49.607425 1355720 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:23:49.607468 1355720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:23:49.641704 1355720 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1027 23:23:49.641732 1355720 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1027 23:23:49.641791 1355720 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:23:49.641797 1355720 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 23:23:49.641889 1355720 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 23:23:49.642126 1355720 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1027 23:23:49.642182 1355720 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 23:23:49.642339 1355720 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1027 23:23:49.642424 1355720 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 23:23:49.642599 1355720 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 23:23:49.643420 1355720 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 23:23:49.643964 1355720 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 23:23:49.644734 1355720 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1027 23:23:49.645120 1355720 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 23:23:49.645316 1355720 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 23:23:49.645478 1355720 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1027 23:23:49.645629 1355720 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:23:49.646478 1355720 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 23:23:49.876926 1355720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1027 23:23:49.877581 1355720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1027 23:23:49.886274 1355720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1027 23:23:49.886805 1355720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1027 23:23:49.887080 1355720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1027 23:23:49.890308 1355720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 23:23:49.897762 1355720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1027 23:23:50.155203 1355720 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1027 23:23:50.155253 1355720 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 23:23:50.155302 1355720 ssh_runner.go:195] Run: which crictl
	I1027 23:23:50.155389 1355720 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1027 23:23:50.155411 1355720 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 23:23:50.155439 1355720 ssh_runner.go:195] Run: which crictl
	I1027 23:23:50.183389 1355720 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1027 23:23:50.183427 1355720 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1027 23:23:50.183476 1355720 ssh_runner.go:195] Run: which crictl
	I1027 23:23:50.183539 1355720 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1027 23:23:50.183552 1355720 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1027 23:23:50.183573 1355720 ssh_runner.go:195] Run: which crictl
	I1027 23:23:50.183616 1355720 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1027 23:23:50.183629 1355720 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 23:23:50.183649 1355720 ssh_runner.go:195] Run: which crictl
	I1027 23:23:50.192729 1355720 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1027 23:23:50.192768 1355720 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 23:23:50.192819 1355720 ssh_runner.go:195] Run: which crictl
	I1027 23:23:50.192870 1355720 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1027 23:23:50.192882 1355720 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 23:23:50.192906 1355720 ssh_runner.go:195] Run: which crictl
	I1027 23:23:50.192984 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1027 23:23:50.193033 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1027 23:23:50.193079 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1027 23:23:50.202542 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1027 23:23:50.202955 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1027 23:23:50.349093 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1027 23:23:50.349163 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1027 23:23:50.349202 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 23:23:50.349261 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1027 23:23:50.349313 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1027 23:23:50.367212 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1027 23:23:50.367293 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1027 23:23:50.541521 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1027 23:23:50.541639 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1027 23:23:50.541705 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1027 23:23:50.541740 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 23:23:50.541771 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1027 23:23:50.556178 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1027 23:23:50.556337 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
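The "needs transfer" lines above are minikube's cache_images check: for each required image it asks the runtime which image ID sits behind the tag, and if the tag is missing or resolves to a different ID than the cached tarball, the stale tag is removed (the crictl rmi runs) and the image is scheduled for a re-load. A minimal sketch of that check, with hypothetical helper names and podman/crictl invocations mirroring the log:

    // Illustrative only; helper names are made up, not minikube's real API.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runtimeImageID returns the image ID the runtime has for a tag,
    // or an error if the tag does not exist.
    func runtimeImageID(tag string) (string, error) {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", tag).Output()
        return strings.TrimSpace(string(out)), err
    }

    // needsTransfer mirrors the check in the log: the image must be
    // (re)loaded when the tag is absent or points at a different ID.
    func needsTransfer(tag, cachedID string) bool {
        id, err := runtimeImageID(tag)
        return err != nil || id != cachedID
    }

    func main() {
        tag := "registry.k8s.io/pause:3.10.1"
        cachedID := "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
        if needsTransfer(tag, cachedID) {
            // Drop the stale tag first, as the `crictl rmi` runs above do.
            _ = exec.Command("sudo", "/usr/local/bin/crictl", "rmi", tag).Run()
            fmt.Printf("%s needs transfer\n", tag)
        }
    }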
	I1027 23:23:50.737191 1355720 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1027 23:23:50.737282 1355720 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1027 23:23:50.737554 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1027 23:23:50.737301 1355720 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1027 23:23:50.737556 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1027 23:23:50.737432 1355720 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1027 23:23:50.737702 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1027 23:23:50.737738 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1027 23:23:50.737450 1355720 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1027 23:23:50.737793 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1027 23:23:50.737406 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1027 23:23:50.737454 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 23:23:49.653573 1357280 out.go:252] * Restarting existing docker container for "old-k8s-version-477179" ...
	I1027 23:23:49.653670 1357280 cli_runner.go:164] Run: docker start old-k8s-version-477179
	I1027 23:23:49.948522 1357280 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:23:49.978345 1357280 kic.go:430] container "old-k8s-version-477179" state is running.
	I1027 23:23:49.978784 1357280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-477179
	I1027 23:23:50.016560 1357280 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/config.json ...
	I1027 23:23:50.016832 1357280 machine.go:94] provisionDockerMachine start ...
	I1027 23:23:50.016921 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:50.049949 1357280 main.go:143] libmachine: Using SSH client type: native
	I1027 23:23:50.050285 1357280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34569 <nil> <nil>}
	I1027 23:23:50.050303 1357280 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:23:50.051175 1357280 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42288->127.0.0.1:34569: read: connection reset by peer
	I1027 23:23:53.214267 1357280 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-477179
	
	I1027 23:23:53.214302 1357280 ubuntu.go:182] provisioning hostname "old-k8s-version-477179"
	I1027 23:23:53.214365 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:53.236935 1357280 main.go:143] libmachine: Using SSH client type: native
	I1027 23:23:53.237250 1357280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34569 <nil> <nil>}
	I1027 23:23:53.237270 1357280 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-477179 && echo "old-k8s-version-477179" | sudo tee /etc/hostname
	I1027 23:23:53.412204 1357280 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-477179
	
	I1027 23:23:53.412307 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:53.439083 1357280 main.go:143] libmachine: Using SSH client type: native
	I1027 23:23:53.439393 1357280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34569 <nil> <nil>}
	I1027 23:23:53.439417 1357280 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-477179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-477179/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-477179' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:23:53.599197 1357280 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 23:23:53.599276 1357280 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:23:53.599312 1357280 ubuntu.go:190] setting up certificates
	I1027 23:23:53.599349 1357280 provision.go:84] configureAuth start
	I1027 23:23:53.599457 1357280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-477179
	I1027 23:23:53.621663 1357280 provision.go:143] copyHostCerts
	I1027 23:23:53.621740 1357280 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:23:53.621755 1357280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:23:53.621830 1357280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:23:53.621944 1357280 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:23:53.621950 1357280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:23:53.621977 1357280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:23:53.622049 1357280 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:23:53.622054 1357280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:23:53.622078 1357280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:23:53.622134 1357280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-477179 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-477179]
	I1027 23:23:53.937063 1357280 provision.go:177] copyRemoteCerts
	I1027 23:23:53.937187 1357280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:23:53.937271 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:53.955807 1357280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:23:54.063991 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:23:54.093343 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1027 23:23:54.118112 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 23:23:54.139377 1357280 provision.go:87] duration metric: took 539.988459ms to configureAuth
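The copyHostCerts steps above follow a remove-then-copy pattern ("found ..., removing ..." then "cp:"), so repeated provisioning runs never append to or partially overwrite an existing PEM. A stripped-down sketch of that idiom, assuming illustrative paths (this is not minikube's actual copyHostCerts):

    package main

    import "os"

    // copyHostCert removes any stale destination first, then writes a
    // fresh copy, so reruns are idempotent. Sketch only.
    func copyHostCert(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            if err := os.Remove(dst); err != nil {
                return err
            }
        }
        data, err := os.ReadFile(src)
        if err != nil {
            return err
        }
        return os.WriteFile(dst, data, 0o600)
    }

    func main() {
        _ = copyHostCert(os.ExpandEnv("$HOME/.minikube/certs/ca.pem"),
            os.ExpandEnv("$HOME/.minikube/ca.pem"))
    }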
	I1027 23:23:54.139445 1357280 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:23:54.139666 1357280 config.go:182] Loaded profile config "old-k8s-version-477179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 23:23:54.139813 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:54.158334 1357280 main.go:143] libmachine: Using SSH client type: native
	I1027 23:23:54.158661 1357280 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34569 <nil> <nil>}
	I1027 23:23:54.158677 1357280 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:23:50.784971 1355720 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1027 23:23:50.785141 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1027 23:23:50.785228 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1027 23:23:50.785286 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1027 23:23:50.785372 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1027 23:23:50.785474 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1027 23:23:50.785579 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1027 23:23:50.785613 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1027 23:23:50.785698 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1027 23:23:50.785729 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1027 23:23:50.785805 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1027 23:23:50.785834 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1027 23:23:50.785949 1355720 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1027 23:23:50.786039 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1027 23:23:50.841352 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1027 23:23:50.841385 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1027 23:23:50.841440 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1027 23:23:50.841453 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
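Each scp above is gated by the stat probe that precedes it: exit status 1 from `stat -c "%s %y" <path>` means the tarball is absent on the node and must be copied. A sketch of that gate, assuming a hypothetical runRemote helper and host alias standing in for minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runRemote is a stand-in for minikube's ssh_runner: it runs cmd on the
    // node over SSH. The host alias "minikube-node" is hypothetical.
    func runRemote(cmd string) error {
        return exec.Command("ssh", "minikube-node", cmd).Run()
    }

    // ensureImageOnNode copies the cached tarball only when the stat probe
    // fails, mirroring the "existence check ... Process exited with status 1"
    // lines above.
    func ensureImageOnNode(local, remote string) error {
        if err := runRemote(fmt.Sprintf(`stat -c "%%s %%y" %s`, remote)); err == nil {
            return nil // already present, skip the transfer
        }
        return exec.Command("scp", local, "minikube-node:"+remote).Run()
    }

    func main() {
        _ = ensureImageOnNode(
            ".minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1",
            "/var/lib/minikube/images/pause_3.10.1")
    }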
	I1027 23:23:50.871091 1355720 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1027 23:23:50.871210 1355720 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1027 23:23:51.105665 1355720 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1027 23:23:51.105942 1355720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:23:51.269027 1355720 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1027 23:23:51.347155 1355720 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1027 23:23:51.347520 1355720 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:23:51.347602 1355720 ssh_runner.go:195] Run: which crictl
	I1027 23:23:51.358911 1355720 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1027 23:23:51.359026 1355720 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1027 23:23:51.417696 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:23:53.404713 1355720 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.045634467s)
	I1027 23:23:53.404737 1355720 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1027 23:23:53.404742 1355720 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.987014306s)
	I1027 23:23:53.404756 1355720 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1027 23:23:53.404803 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:23:53.404803 1355720 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1027 23:23:55.658586 1355720 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.253703673s)
	I1027 23:23:55.658610 1355720 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1027 23:23:55.658629 1355720 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1027 23:23:55.658675 1355720 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1027 23:23:55.658699 1355720 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.25387156s)
	I1027 23:23:55.658765 1355720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
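Note the crio.go:275 lines: tarballs are transferred concurrently, but loaded strictly one at a time, since concurrent `podman load` calls would contend for the shared image store. A sketch of that serialized loop (paths are illustrative):

    package main

    import "os/exec"

    // loadImages loads cached tarballs into CRI-O's store sequentially,
    // matching the one-at-a-time "Loading image:" lines above.
    func loadImages(tarballs []string) error {
        for _, t := range tarballs {
            if err := exec.Command("sudo", "podman", "load", "-i", t).Run(); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        _ = loadImages([]string{
            "/var/lib/minikube/images/pause_3.10.1",
            "/var/lib/minikube/images/kube-scheduler_v1.34.1",
            "/var/lib/minikube/images/coredns_v1.12.1",
        })
    }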
	I1027 23:23:54.546224 1357280 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:23:54.546293 1357280 machine.go:97] duration metric: took 4.529438507s to provisionDockerMachine
	I1027 23:23:54.546322 1357280 start.go:293] postStartSetup for "old-k8s-version-477179" (driver="docker")
	I1027 23:23:54.546366 1357280 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:23:54.546476 1357280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:23:54.546576 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:54.574167 1357280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:23:54.679726 1357280 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:23:54.685927 1357280 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:23:54.685958 1357280 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:23:54.685969 1357280 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:23:54.686023 1357280 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:23:54.686118 1357280 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:23:54.686221 1357280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:23:54.694922 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:23:54.719403 1357280 start.go:296] duration metric: took 173.048882ms for postStartSetup
	I1027 23:23:54.719489 1357280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:23:54.719562 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:54.745408 1357280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:23:54.862023 1357280 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:23:54.867796 1357280 fix.go:57] duration metric: took 5.242868765s for fixHost
	I1027 23:23:54.867825 1357280 start.go:83] releasing machines lock for "old-k8s-version-477179", held for 5.242923338s
	I1027 23:23:54.867897 1357280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-477179
	I1027 23:23:54.898735 1357280 ssh_runner.go:195] Run: cat /version.json
	I1027 23:23:54.898796 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:54.899030 1357280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:23:54.899093 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:54.935512 1357280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:23:54.945178 1357280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:23:55.164577 1357280 ssh_runner.go:195] Run: systemctl --version
	I1027 23:23:55.171829 1357280 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:23:55.218305 1357280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:23:55.224476 1357280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:23:55.224550 1357280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:23:55.234467 1357280 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 23:23:55.234532 1357280 start.go:496] detecting cgroup driver to use...
	I1027 23:23:55.234582 1357280 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 23:23:55.234653 1357280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:23:55.251300 1357280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:23:55.266121 1357280 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:23:55.266230 1357280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:23:55.283359 1357280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:23:55.297861 1357280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:23:55.450015 1357280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:23:55.605338 1357280 docker.go:234] disabling docker service ...
	I1027 23:23:55.605463 1357280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:23:55.622884 1357280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:23:55.642699 1357280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:23:55.801794 1357280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:23:55.950604 1357280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:23:55.965723 1357280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:23:55.980950 1357280 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1027 23:23:55.981047 1357280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:55.990499 1357280 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:23:55.990594 1357280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:56.000224 1357280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:56.011612 1357280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:56.022156 1357280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:23:56.032258 1357280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:56.042887 1357280 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:56.052588 1357280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:23:56.062861 1357280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:23:56.072071 1357280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:23:56.081184 1357280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:23:56.227637 1357280 ssh_runner.go:195] Run: sudo systemctl restart crio
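All of the sed runs above edit the same drop-in, /etc/crio/crio.conf.d/02-crio.conf, before crio is restarted. Reconstructed from the commands (not captured from the node), the relevant fragment of that drop-in ends up looking roughly like:

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"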
	I1027 23:23:56.628927 1357280 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:23:56.629040 1357280 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:23:56.633326 1357280 start.go:564] Will wait 60s for crictl version
	I1027 23:23:56.633420 1357280 ssh_runner.go:195] Run: which crictl
	I1027 23:23:56.638681 1357280 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:23:56.676931 1357280 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 23:23:56.677038 1357280 ssh_runner.go:195] Run: crio --version
	I1027 23:23:56.743293 1357280 ssh_runner.go:195] Run: crio --version
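"Will wait 60s for socket path" is a bounded poll: stat the CRI socket until it appears or the deadline passes, then probe `crictl version` the same way. A sketch of such a wait, assuming the check runs on the node:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for the CRI socket with a deadline, like the
    // "Will wait 60s for socket path /var/run/crio/crio.sock" step above.
    // Sketch only; minikube's actual retry cadence may differ.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }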
	I1027 23:23:56.783878 1357280 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1027 23:23:56.787228 1357280 cli_runner.go:164] Run: docker network inspect old-k8s-version-477179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:23:56.809610 1357280 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 23:23:56.814099 1357280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:23:56.823930 1357280 kubeadm.go:884] updating cluster {Name:old-k8s-version-477179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-477179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:23:56.824060 1357280 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 23:23:56.824114 1357280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:23:56.864870 1357280 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:23:56.864970 1357280 crio.go:433] Images already preloaded, skipping extraction
	I1027 23:23:56.865061 1357280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:23:56.894104 1357280 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:23:56.894125 1357280 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:23:56.894132 1357280 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1027 23:23:56.894242 1357280 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-477179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-477179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 23:23:56.894322 1357280 ssh_runner.go:195] Run: crio config
	I1027 23:23:56.971192 1357280 cni.go:84] Creating CNI manager for ""
	I1027 23:23:56.971261 1357280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:23:56.971301 1357280 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 23:23:56.971355 1357280 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-477179 NodeName:old-k8s-version-477179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:23:56.971544 1357280 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-477179"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 23:23:56.971633 1357280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1027 23:23:56.980463 1357280 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:23:56.980599 1357280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:23:56.988769 1357280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1027 23:23:57.002021 1357280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:23:57.019263 1357280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
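The `scp memory --> ...` lines copy generated content (the kubelet unit, its 10-kubeadm.conf drop-in, and kubeadm.yaml.new) straight from an in-memory buffer rather than from a file on the host. One way to sketch that is to pipe the bytes through ssh into `sudo tee`; the host alias and helper here are hypothetical:

    package main

    import (
        "bytes"
        "os/exec"
    )

    // writeRemote streams an in-memory payload to a root-owned path on the
    // node, the moral equivalent of the "scp memory --> ..." steps above.
    func writeRemote(payload []byte, remotePath string) error {
        cmd := exec.Command("ssh", "minikube-node",
            "sudo tee "+remotePath+" >/dev/null")
        cmd.Stdin = bytes.NewReader(payload)
        return cmd.Run()
    }

    func main() {
        unit := []byte("[Unit]\nWants=crio.service\n")
        _ = writeRemote(unit, "/lib/systemd/system/kubelet.service")
    }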
	I1027 23:23:57.050008 1357280 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:23:57.054579 1357280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:23:57.067759 1357280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:23:57.246509 1357280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:23:57.267506 1357280 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179 for IP: 192.168.85.2
	I1027 23:23:57.267530 1357280 certs.go:195] generating shared ca certs ...
	I1027 23:23:57.267549 1357280 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:23:57.267720 1357280 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:23:57.267775 1357280 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:23:57.267787 1357280 certs.go:257] generating profile certs ...
	I1027 23:23:57.267893 1357280 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.key
	I1027 23:23:57.267974 1357280 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/apiserver.key.e54ee9ff
	I1027 23:23:57.268023 1357280 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/proxy-client.key
	I1027 23:23:57.268168 1357280 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:23:57.268212 1357280 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:23:57.268225 1357280 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:23:57.268250 1357280 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:23:57.268286 1357280 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:23:57.268312 1357280 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:23:57.268366 1357280 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:23:57.269056 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:23:57.298976 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:23:57.323852 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:23:57.356487 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:23:57.385476 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1027 23:23:57.419133 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 23:23:57.463366 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:23:57.512692 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 23:23:57.562032 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:23:57.596237 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:23:57.629615 1357280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:23:57.672159 1357280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:23:57.689056 1357280 ssh_runner.go:195] Run: openssl version
	I1027 23:23:57.696102 1357280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:23:57.706532 1357280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:23:57.711540 1357280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:23:57.711627 1357280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:23:57.755892 1357280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:23:57.764990 1357280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:23:57.775297 1357280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:23:57.779179 1357280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:23:57.779259 1357280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:23:57.825111 1357280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 23:23:57.834033 1357280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:23:57.843274 1357280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:23:57.847523 1357280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:23:57.847647 1357280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:23:57.895930 1357280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
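The `/etc/ssl/certs/<hash>.0` links above use OpenSSL's subject-hash naming, which is how the system trust store looks certificates up: `openssl x509 -hash -noout` prints the hash, and the certificate is then linked as <hash>.0. A sketch of that pairing:

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    // linkBySubjectHash creates the /etc/ssl/certs/<hash>.0 symlink that the
    // openssl runs above compute, so the trust store can find the cert.
    // Sketch only; running it for real requires root.
    func linkBySubjectHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        return os.Symlink(certPath, link)
    }

    func main() {
        _ = linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem")
    }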
	I1027 23:23:57.904694 1357280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:23:57.909986 1357280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 23:23:57.952870 1357280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 23:23:58.049123 1357280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 23:23:58.143084 1357280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 23:23:58.206668 1357280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 23:23:58.310997 1357280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
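`openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 24 hours; minikube runs it against each control-plane cert before deciding whether a restart can reuse them. The same check expressed with Go's crypto/x509 (the path is illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside d,
    // mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }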
	I1027 23:23:58.431912 1357280 kubeadm.go:401] StartCluster: {Name:old-k8s-version-477179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-477179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:23:58.432067 1357280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:23:58.432172 1357280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:23:58.558818 1357280 cri.go:89] found id: "31d2036be45f7a86c828442bcf45019e9bddf4f8b4f0001aa49eaad623860144"
	I1027 23:23:58.558881 1357280 cri.go:89] found id: "4cc4ea0f92239fc9155b151efab480bb22dbf8b3551f7c315daae1493853f27f"
	I1027 23:23:58.558901 1357280 cri.go:89] found id: "4df94ad74d55d5841a5ebd671ae3a091cbc30efa3d08697d8baed42fd415cbf1"
	I1027 23:23:58.558928 1357280 cri.go:89] found id: "0daf78b0c28b92f6f69bc82b09d8267753a05593afe602cb3abe6fd2fe226dd4"
	I1027 23:23:58.558961 1357280 cri.go:89] found id: ""
	I1027 23:23:58.559043 1357280 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 23:23:58.630166 1357280 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:23:58Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:23:58.630306 1357280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:23:58.658530 1357280 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 23:23:58.658593 1357280 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 23:23:58.658678 1357280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 23:23:58.701530 1357280 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 23:23:58.702014 1357280 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-477179" does not appear in /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:23:58.702167 1357280 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-1132878/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-477179" cluster setting kubeconfig missing "old-k8s-version-477179" context setting]
	I1027 23:23:58.702511 1357280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:23:58.704072 1357280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 23:23:58.729812 1357280 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 23:23:58.729898 1357280 kubeadm.go:602] duration metric: took 71.284011ms to restartPrimaryControlPlane
	I1027 23:23:58.729922 1357280 kubeadm.go:403] duration metric: took 298.022711ms to StartCluster
	I1027 23:23:58.729966 1357280 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:23:58.730046 1357280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:23:58.730687 1357280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:23:58.730945 1357280 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:23:58.731363 1357280 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:23:58.731435 1357280 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-477179"
	I1027 23:23:58.731448 1357280 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-477179"
	W1027 23:23:58.731454 1357280 addons.go:247] addon storage-provisioner should already be in state true
	I1027 23:23:58.731474 1357280 host.go:66] Checking if "old-k8s-version-477179" exists ...
	I1027 23:23:58.732095 1357280 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:23:58.732452 1357280 config.go:182] Loaded profile config "old-k8s-version-477179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 23:23:58.732602 1357280 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-477179"
	I1027 23:23:58.732638 1357280 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-477179"
	I1027 23:23:58.732723 1357280 addons.go:69] Setting dashboard=true in profile "old-k8s-version-477179"
	I1027 23:23:58.732735 1357280 addons.go:238] Setting addon dashboard=true in "old-k8s-version-477179"
	W1027 23:23:58.732741 1357280 addons.go:247] addon dashboard should already be in state true
	I1027 23:23:58.732783 1357280 host.go:66] Checking if "old-k8s-version-477179" exists ...
	I1027 23:23:58.733248 1357280 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:23:58.733652 1357280 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:23:58.735104 1357280 out.go:179] * Verifying Kubernetes components...
	I1027 23:23:58.738050 1357280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:23:58.785101 1357280 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 23:23:58.785110 1357280 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:23:58.790483 1357280 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:23:58.790508 1357280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:23:58.790574 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:58.793691 1357280 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 23:23:58.795606 1357280 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-477179"
	W1027 23:23:58.795628 1357280 addons.go:247] addon default-storageclass should already be in state true
	I1027 23:23:58.795651 1357280 host.go:66] Checking if "old-k8s-version-477179" exists ...
	I1027 23:23:58.796781 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 23:23:58.796797 1357280 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 23:23:58.796869 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:58.797379 1357280 cli_runner.go:164] Run: docker container inspect old-k8s-version-477179 --format={{.State.Status}}
	I1027 23:23:58.845041 1357280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:23:58.848705 1357280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:23:58.858972 1357280 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:23:58.858992 1357280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:23:58.859055 1357280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-477179
	I1027 23:23:58.883877 1357280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34569 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/old-k8s-version-477179/id_rsa Username:docker}
	I1027 23:23:59.196431 1357280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:23:59.243693 1357280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:23:57.633591 1355720 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.974806489s)
	I1027 23:23:57.633635 1355720 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1027 23:23:57.633682 1355720 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.97498108s)
	I1027 23:23:57.633704 1355720 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1027 23:23:57.633720 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1027 23:23:57.633727 1355720 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1027 23:23:57.633772 1355720 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1027 23:23:59.833991 1355720 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (2.20018977s)
	I1027 23:23:59.834015 1355720 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1027 23:23:59.834032 1355720 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1027 23:23:59.834077 1355720 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1027 23:23:59.834135 1355720 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.200406011s)
	I1027 23:23:59.834149 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1027 23:23:59.834163 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
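(The storage-provisioner transfer above follows minikube's usual cache pattern: stat the image archive on the node, and copy it from the local cache only when the stat exits non-zero. A minimal stand-alone sketch of the same check; the ssh host alias "node" is illustrative, not taken from this run:)

	# Probe the remote archive; copy from the local image cache only if it is missing.
	if ! ssh node 'stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5' >/dev/null 2>&1; then
	  scp "$HOME/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" \
	      node:/var/lib/minikube/images/storage-provisioner_v5
	fi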
	I1027 23:23:59.401510 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 23:23:59.401585 1357280 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 23:23:59.416039 1357280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:23:59.554900 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 23:23:59.554975 1357280 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 23:23:59.653696 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 23:23:59.653771 1357280 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 23:23:59.733256 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 23:23:59.733329 1357280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 23:23:59.788893 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 23:23:59.788970 1357280 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 23:23:59.813322 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 23:23:59.813402 1357280 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 23:23:59.860259 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 23:23:59.860342 1357280 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 23:23:59.887899 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 23:23:59.887978 1357280 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 23:23:59.926027 1357280 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 23:23:59.926101 1357280 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 23:23:59.965000 1357280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
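(kubectl accepts any number of -f flags in a single apply, which is why all ten dashboard manifests land in one invocation above. Under the assumption that a directory holds only the manifests you want, which is not true here since the storage addons share /etc/kubernetes/addons, the same effect comes from applying the directory:)

	# Apply every manifest in a directory (hypothetical layout holding only the dashboard files):
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/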
	I1027 23:24:01.669636 1355720 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.835536362s)
	I1027 23:24:01.669708 1355720 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1027 23:24:01.669756 1355720 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1027 23:24:01.669839 1355720 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1027 23:24:09.304145 1357280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.107683297s)
	I1027 23:24:09.304469 1357280 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.060700642s)
	I1027 23:24:09.304498 1357280 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-477179" to be "Ready" ...
	I1027 23:24:06.736505 1355720 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.066619335s)
	I1027 23:24:06.736529 1355720 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1027 23:24:06.736547 1355720 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1027 23:24:06.736594 1355720 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1027 23:24:07.642312 1355720 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1027 23:24:07.642350 1355720 cache_images.go:125] Successfully loaded all cached images
	I1027 23:24:07.642356 1355720 cache_images.go:94] duration metric: took 18.000608839s to LoadCachedImages
	I1027 23:24:07.642367 1355720 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 23:24:07.642479 1355720 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-947754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-947754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
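(The kubelet unit override above is what minikube later writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the scp is logged further below. Once written, it can be inspected on the node with standard systemd tooling:)

	# Show the kubelet unit together with all drop-ins, including 10-kubeadm.conf:
	systemctl cat kubelet
	# Or read the generated drop-in directly:
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf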
	I1027 23:24:07.642571 1355720 ssh_runner.go:195] Run: crio config
	I1027 23:24:07.731140 1355720 cni.go:84] Creating CNI manager for ""
	I1027 23:24:07.731166 1355720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:24:07.731188 1355720 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 23:24:07.731214 1355720 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-947754 NodeName:no-preload-947754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:24:07.731345 1355720 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-947754"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
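(The generated file above is what minikube writes to /var/tmp/minikube/kubeadm.yaml a few lines below. Before handing such a file to kubeadm init it can be sanity-checked offline; assuming kubeadm v1.26 or newer, and this run uses v1.34.1, the validator is, as a sketch this run does not itself execute:)

	# Validate the generated kubeadm configuration without touching the cluster:
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml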
	I1027 23:24:07.731422 1355720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:24:07.745303 1355720 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1027 23:24:07.745364 1355720 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1027 23:24:07.761132 1355720 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1027 23:24:07.761223 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1027 23:24:07.762174 1355720 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1027 23:24:07.762745 1355720 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1027 23:24:07.772035 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1027 23:24:07.772072 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1027 23:24:08.833955 1355720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:24:08.852088 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1027 23:24:08.859059 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1027 23:24:08.859092 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1027 23:24:09.206883 1355720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1027 23:24:09.233294 1355720 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1027 23:24:09.233335 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
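(Each of the three downloads above pins a checksum via the ?checksum=file:...sha256 query that minikube's downloader understands. The equivalent manual verification, sketched with plain curl and sha256sum using the URLs copied from the log; the dl.k8s.io .sha256 files contain only the hex digest:)

	# Download kubelet and verify it against the published SHA-256 (illustrative):
	curl -fLo kubelet "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet"
	echo "$(curl -fsSL https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256)  kubelet" | sha256sum -c -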
	I1027 23:24:09.903349 1355720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:24:09.914716 1355720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 23:24:09.956939 1355720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:24:09.986063 1355720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1027 23:24:10.020733 1355720 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:24:10.030200 1355720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
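(The one-liner above rewrites /etc/hosts in three steps: filter out any stale control-plane.minikube.internal entry, append the current one, and copy the temp file back with sudo. Unrolled, the same commands are:)

	# Same sequence as the logged bash -c string, spelled out:
	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
	printf '192.168.76.2\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts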
	I1027 23:24:10.050701 1355720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:24:10.285815 1355720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:24:10.305548 1355720 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754 for IP: 192.168.76.2
	I1027 23:24:10.305622 1355720 certs.go:195] generating shared ca certs ...
	I1027 23:24:10.305653 1355720 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:24:10.305834 1355720 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:24:10.305915 1355720 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:24:10.305949 1355720 certs.go:257] generating profile certs ...
	I1027 23:24:10.306030 1355720 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/client.key
	I1027 23:24:10.306069 1355720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/client.crt with IP's: []
	I1027 23:24:09.393523 1357280 node_ready.go:49] node "old-k8s-version-477179" is "Ready"
	I1027 23:24:09.393549 1357280 node_ready.go:38] duration metric: took 89.039618ms for node "old-k8s-version-477179" to be "Ready" ...
	I1027 23:24:09.393565 1357280 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:24:09.393625 1357280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:24:10.388025 1357280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.971909082s)
	I1027 23:24:10.973967 1357280 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.580321707s)
	I1027 23:24:10.973996 1357280 api_server.go:72] duration metric: took 12.243001739s to wait for apiserver process to appear ...
	I1027 23:24:10.974002 1357280 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:24:10.974021 1357280 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 23:24:10.974556 1357280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.00945109s)
	I1027 23:24:10.977938 1357280 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-477179 addons enable metrics-server
	
	I1027 23:24:10.980919 1357280 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1027 23:24:10.983911 1357280 addons.go:514] duration metric: took 12.252533024s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1027 23:24:10.994846 1357280 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 23:24:10.996771 1357280 api_server.go:141] control plane version: v1.28.0
	I1027 23:24:10.996794 1357280 api_server.go:131] duration metric: took 22.784781ms to wait for apiserver health ...
	I1027 23:24:10.996803 1357280 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:24:11.005153 1357280 system_pods.go:59] 8 kube-system pods found
	I1027 23:24:11.005200 1357280 system_pods.go:61] "coredns-5dd5756b68-zmrh9" [da1efa5b-0929-4757-a96a-7b030212b09b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:24:11.005212 1357280 system_pods.go:61] "etcd-old-k8s-version-477179" [be864fb9-c8b5-4aae-bc2d-69d5d9d85994] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:24:11.005219 1357280 system_pods.go:61] "kindnet-z26d6" [3b032e58-90ac-4c80-95f1-1d1fcb2b96f3] Running
	I1027 23:24:11.005227 1357280 system_pods.go:61] "kube-apiserver-old-k8s-version-477179" [72d86f1f-8f08-49fe-bf99-ec1a3849859f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:24:11.005235 1357280 system_pods.go:61] "kube-controller-manager-old-k8s-version-477179" [78689547-e0c2-45a3-a2d8-2ee973b8d629] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:24:11.005243 1357280 system_pods.go:61] "kube-proxy-t6hvl" [2953b030-a25c-4882-9fab-7361700ee9ec] Running
	I1027 23:24:11.005253 1357280 system_pods.go:61] "kube-scheduler-old-k8s-version-477179" [b84fc635-c8d8-4276-9dc5-3c077b3cb355] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:24:11.005265 1357280 system_pods.go:61] "storage-provisioner" [cbfbf2cd-d56e-4b50-80d3-178ee16d8c54] Running
	I1027 23:24:11.005272 1357280 system_pods.go:74] duration metric: took 8.463348ms to wait for pod list to return data ...
	I1027 23:24:11.005286 1357280 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:24:11.008614 1357280 default_sa.go:45] found service account: "default"
	I1027 23:24:11.008642 1357280 default_sa.go:55] duration metric: took 3.34984ms for default service account to be created ...
	I1027 23:24:11.008653 1357280 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:24:11.013637 1357280 system_pods.go:86] 8 kube-system pods found
	I1027 23:24:11.013672 1357280 system_pods.go:89] "coredns-5dd5756b68-zmrh9" [da1efa5b-0929-4757-a96a-7b030212b09b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:24:11.013680 1357280 system_pods.go:89] "etcd-old-k8s-version-477179" [be864fb9-c8b5-4aae-bc2d-69d5d9d85994] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:24:11.013687 1357280 system_pods.go:89] "kindnet-z26d6" [3b032e58-90ac-4c80-95f1-1d1fcb2b96f3] Running
	I1027 23:24:11.013694 1357280 system_pods.go:89] "kube-apiserver-old-k8s-version-477179" [72d86f1f-8f08-49fe-bf99-ec1a3849859f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:24:11.013700 1357280 system_pods.go:89] "kube-controller-manager-old-k8s-version-477179" [78689547-e0c2-45a3-a2d8-2ee973b8d629] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:24:11.013706 1357280 system_pods.go:89] "kube-proxy-t6hvl" [2953b030-a25c-4882-9fab-7361700ee9ec] Running
	I1027 23:24:11.013712 1357280 system_pods.go:89] "kube-scheduler-old-k8s-version-477179" [b84fc635-c8d8-4276-9dc5-3c077b3cb355] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:24:11.013717 1357280 system_pods.go:89] "storage-provisioner" [cbfbf2cd-d56e-4b50-80d3-178ee16d8c54] Running
	I1027 23:24:11.013729 1357280 system_pods.go:126] duration metric: took 5.070332ms to wait for k8s-apps to be running ...
	I1027 23:24:11.013748 1357280 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:24:11.013808 1357280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:24:11.039931 1357280 system_svc.go:56] duration metric: took 26.17377ms WaitForService to wait for kubelet
	I1027 23:24:11.039961 1357280 kubeadm.go:587] duration metric: took 12.308965281s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:24:11.039981 1357280 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:24:11.046418 1357280 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:24:11.046451 1357280 node_conditions.go:123] node cpu capacity is 2
	I1027 23:24:11.046464 1357280 node_conditions.go:105] duration metric: took 6.477851ms to run NodePressure ...
	I1027 23:24:11.046477 1357280 start.go:242] waiting for startup goroutines ...
	I1027 23:24:11.046484 1357280 start.go:247] waiting for cluster config update ...
	I1027 23:24:11.046495 1357280 start.go:256] writing updated cluster config ...
	I1027 23:24:11.046788 1357280 ssh_runner.go:195] Run: rm -f paused
	I1027 23:24:11.050730 1357280 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:24:11.057569 1357280 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-zmrh9" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 23:24:13.065502 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	I1027 23:24:11.397828 1355720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/client.crt ...
	I1027 23:24:11.397863 1355720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/client.crt: {Name:mk246faa386b3d632d180b2ddb2a2af262a530fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:24:11.398076 1355720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/client.key ...
	I1027 23:24:11.398093 1355720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/client.key: {Name:mk1b16d53560d716c6187e1f2fd113fce11edbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:24:11.398187 1355720 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.key.2667a321
	I1027 23:24:11.398202 1355720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.crt.2667a321 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1027 23:24:11.932196 1355720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.crt.2667a321 ...
	I1027 23:24:11.932227 1355720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.crt.2667a321: {Name:mk526f75af43fe7a780cc0ce069546e301aae526 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:24:11.932414 1355720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.key.2667a321 ...
	I1027 23:24:11.932429 1355720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.key.2667a321: {Name:mk309f8b38f38f0ba578115f11af46e18b11b566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:24:11.932521 1355720 certs.go:382] copying /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.crt.2667a321 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.crt
	I1027 23:24:11.932604 1355720 certs.go:386] copying /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.key.2667a321 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.key
	I1027 23:24:11.932665 1355720 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.key
	I1027 23:24:11.932684 1355720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.crt with IP's: []
	I1027 23:24:12.846535 1355720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.crt ...
	I1027 23:24:12.846568 1355720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.crt: {Name:mk17fa605865835ca4425e4ef85856b55ea972fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:24:12.846773 1355720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.key ...
	I1027 23:24:12.846791 1355720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.key: {Name:mk8233c77ac72dd69e58085a1456a8b1640fd665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:24:12.846981 1355720 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:24:12.847025 1355720 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:24:12.847037 1355720 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:24:12.847061 1355720 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:24:12.847090 1355720 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:24:12.847117 1355720 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:24:12.847158 1355720 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:24:12.847717 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:24:12.887693 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:24:12.906482 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:24:12.927072 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:24:12.946691 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 23:24:12.965381 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 23:24:12.984625 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:24:13.004031 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 23:24:13.024560 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:24:13.043854 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:24:13.064312 1355720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:24:13.084086 1355720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:24:13.098928 1355720 ssh_runner.go:195] Run: openssl version
	I1027 23:24:13.105847 1355720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:24:13.115229 1355720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:24:13.119718 1355720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:24:13.119787 1355720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:24:13.161089 1355720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 23:24:13.171745 1355720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:24:13.180869 1355720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:24:13.185547 1355720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:24:13.185621 1355720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:24:13.228019 1355720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
	I1027 23:24:13.237105 1355720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:24:13.246006 1355720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:24:13.250328 1355720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:24:13.250452 1355720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:24:13.292905 1355720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:24:13.303651 1355720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:24:13.307909 1355720 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 23:24:13.308008 1355720 kubeadm.go:401] StartCluster: {Name:no-preload-947754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-947754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:24:13.308096 1355720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:24:13.308155 1355720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:24:13.338339 1355720 cri.go:89] found id: ""
	I1027 23:24:13.338488 1355720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:24:13.347388 1355720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 23:24:13.356296 1355720 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 23:24:13.356367 1355720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 23:24:13.365577 1355720 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 23:24:13.365603 1355720 kubeadm.go:158] found existing configuration files:
	
	I1027 23:24:13.365701 1355720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 23:24:13.376171 1355720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 23:24:13.376264 1355720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 23:24:13.384677 1355720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 23:24:13.393189 1355720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 23:24:13.393307 1355720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 23:24:13.401913 1355720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 23:24:13.410514 1355720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 23:24:13.410633 1355720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 23:24:13.418982 1355720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 23:24:13.427782 1355720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 23:24:13.427901 1355720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
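(The four grep/rm pairs above implement one rule: a kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is treated as stale and removed before kubeadm init runs. Compactly:)

	# Drop any kubeconfig that does not reference the expected control-plane endpoint:
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done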
	I1027 23:24:13.439057 1355720 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 23:24:13.527225 1355720 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 23:24:13.527484 1355720 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 23:24:13.597189 1355720 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1027 23:24:15.065671 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	W1027 23:24:17.564434 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	W1027 23:24:20.067381 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	W1027 23:24:22.069641 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	W1027 23:24:24.072187 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	W1027 23:24:26.568587 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	W1027 23:24:29.069598 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	I1027 23:24:31.979398 1355720 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 23:24:31.979462 1355720 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 23:24:31.979557 1355720 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 23:24:31.979618 1355720 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 23:24:31.979658 1355720 kubeadm.go:319] OS: Linux
	I1027 23:24:31.979708 1355720 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 23:24:31.979762 1355720 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1027 23:24:31.979814 1355720 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 23:24:31.979868 1355720 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 23:24:31.979922 1355720 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 23:24:31.979978 1355720 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 23:24:31.980031 1355720 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 23:24:31.980085 1355720 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 23:24:31.980139 1355720 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1027 23:24:31.980218 1355720 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 23:24:31.980320 1355720 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 23:24:31.980425 1355720 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 23:24:31.980494 1355720 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 23:24:31.983788 1355720 out.go:252]   - Generating certificates and keys ...
	I1027 23:24:31.983940 1355720 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 23:24:31.984039 1355720 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 23:24:31.984166 1355720 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 23:24:31.984255 1355720 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 23:24:31.984355 1355720 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 23:24:31.984416 1355720 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 23:24:31.984481 1355720 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 23:24:31.984620 1355720 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-947754] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 23:24:31.984683 1355720 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 23:24:31.984820 1355720 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-947754] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 23:24:31.984897 1355720 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 23:24:31.984973 1355720 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 23:24:31.985027 1355720 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 23:24:31.985094 1355720 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 23:24:31.985160 1355720 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 23:24:31.985228 1355720 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 23:24:31.985295 1355720 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 23:24:31.985373 1355720 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 23:24:31.985439 1355720 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 23:24:31.985532 1355720 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 23:24:31.985608 1355720 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 23:24:31.988781 1355720 out.go:252]   - Booting up control plane ...
	I1027 23:24:31.988904 1355720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 23:24:31.989001 1355720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 23:24:31.989081 1355720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 23:24:31.989202 1355720 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 23:24:31.989309 1355720 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 23:24:31.989429 1355720 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 23:24:31.989527 1355720 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 23:24:31.989576 1355720 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 23:24:31.989727 1355720 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 23:24:31.989849 1355720 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 23:24:31.989926 1355720 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.502318568s
	I1027 23:24:31.990049 1355720 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 23:24:31.990142 1355720 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1027 23:24:31.990245 1355720 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 23:24:31.990335 1355720 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 23:24:31.990507 1355720 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.875689415s
	I1027 23:24:31.990611 1355720 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.579022358s
	I1027 23:24:31.990725 1355720 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.503198217s
	I1027 23:24:31.990966 1355720 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 23:24:31.991152 1355720 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 23:24:31.991281 1355720 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 23:24:31.991519 1355720 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-947754 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 23:24:31.991608 1355720 kubeadm.go:319] [bootstrap-token] Using token: ii6ez7.m5js9anpys51h0g4
	I1027 23:24:31.994958 1355720 out.go:252]   - Configuring RBAC rules ...
	I1027 23:24:31.995172 1355720 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 23:24:31.995270 1355720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 23:24:31.995419 1355720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 23:24:31.995598 1355720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 23:24:31.995761 1355720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 23:24:31.995887 1355720 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 23:24:31.996051 1355720 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 23:24:31.996129 1355720 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 23:24:31.996203 1355720 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 23:24:31.996214 1355720 kubeadm.go:319] 
	I1027 23:24:31.996293 1355720 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 23:24:31.996303 1355720 kubeadm.go:319] 
	I1027 23:24:31.996414 1355720 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 23:24:31.996426 1355720 kubeadm.go:319] 
	I1027 23:24:31.996470 1355720 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 23:24:31.996555 1355720 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 23:24:31.996634 1355720 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 23:24:31.996645 1355720 kubeadm.go:319] 
	I1027 23:24:31.996719 1355720 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 23:24:31.996729 1355720 kubeadm.go:319] 
	I1027 23:24:31.996795 1355720 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 23:24:31.996836 1355720 kubeadm.go:319] 
	I1027 23:24:31.996916 1355720 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 23:24:31.997043 1355720 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 23:24:31.997138 1355720 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 23:24:31.997171 1355720 kubeadm.go:319] 
	I1027 23:24:31.997283 1355720 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 23:24:31.997410 1355720 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 23:24:31.997421 1355720 kubeadm.go:319] 
	I1027 23:24:31.997547 1355720 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ii6ez7.m5js9anpys51h0g4 \
	I1027 23:24:31.997701 1355720 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 \
	I1027 23:24:31.997752 1355720 kubeadm.go:319] 	--control-plane 
	I1027 23:24:31.997764 1355720 kubeadm.go:319] 
	I1027 23:24:31.997869 1355720 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 23:24:31.997897 1355720 kubeadm.go:319] 
	I1027 23:24:31.998022 1355720 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ii6ez7.m5js9anpys51h0g4 \
	I1027 23:24:31.998183 1355720 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 
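(The --discovery-token-ca-cert-hash printed above can be recomputed on the control plane to confirm a join command is talking to the right CA. Assuming the default RSA cluster CA, which this run keeps under its certificateDir /var/lib/minikube/certs, the standard recipe is:)

	# Recompute the CA public-key hash used by kubeadm join (should reproduce the sha256 value above):
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'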
	I1027 23:24:31.998197 1355720 cni.go:84] Creating CNI manager for ""
	I1027 23:24:31.998214 1355720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:24:32.003777 1355720 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1027 23:24:31.564157 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	W1027 23:24:33.565413 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	I1027 23:24:32.006973 1355720 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 23:24:32.020250 1355720 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 23:24:32.020287 1355720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 23:24:32.080823 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 23:24:32.533635 1355720 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 23:24:32.533765 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:32.533833 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-947754 minikube.k8s.io/updated_at=2025_10_27T23_24_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=no-preload-947754 minikube.k8s.io/primary=true
	I1027 23:24:32.920118 1355720 ops.go:34] apiserver oom_adj: -16
	I1027 23:24:32.920222 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:33.420715 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:33.920900 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:34.421025 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:34.921190 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:35.421003 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:35.920829 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:36.421176 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:36.921167 1355720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:24:37.083564 1355720 kubeadm.go:1114] duration metric: took 4.54984352s to wait for elevateKubeSystemPrivileges
	I1027 23:24:37.083596 1355720 kubeadm.go:403] duration metric: took 23.775593845s to StartCluster
	I1027 23:24:37.083614 1355720 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:24:37.083679 1355720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:24:37.084689 1355720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:24:37.084937 1355720 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:24:37.085076 1355720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 23:24:37.085361 1355720 config.go:182] Loaded profile config "no-preload-947754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:24:37.085405 1355720 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:24:37.085496 1355720 addons.go:69] Setting storage-provisioner=true in profile "no-preload-947754"
	I1027 23:24:37.085512 1355720 addons.go:238] Setting addon storage-provisioner=true in "no-preload-947754"
	I1027 23:24:37.085537 1355720 host.go:66] Checking if "no-preload-947754" exists ...
	I1027 23:24:37.086043 1355720 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:24:37.086578 1355720 addons.go:69] Setting default-storageclass=true in profile "no-preload-947754"
	I1027 23:24:37.086611 1355720 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-947754"
	I1027 23:24:37.086913 1355720 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:24:37.088192 1355720 out.go:179] * Verifying Kubernetes components...
	I1027 23:24:37.090523 1355720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:24:37.121614 1355720 addons.go:238] Setting addon default-storageclass=true in "no-preload-947754"
	I1027 23:24:37.121655 1355720 host.go:66] Checking if "no-preload-947754" exists ...
	I1027 23:24:37.122103 1355720 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:24:37.146430 1355720 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:24:37.150297 1355720 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:24:37.150320 1355720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:24:37.150403 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:24:37.157322 1355720 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:24:37.157346 1355720 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:24:37.157417 1355720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:24:37.194916 1355720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34564 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:24:37.198602 1355720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34564 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:24:37.503432 1355720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 23:24:37.503608 1355720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:24:37.556341 1355720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:24:37.585951 1355720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:24:38.302109 1355720 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1027 23:24:38.303479 1355720 node_ready.go:35] waiting up to 6m0s for node "no-preload-947754" to be "Ready" ...
	I1027 23:24:38.727247 1355720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.170822687s)
	I1027 23:24:38.727304 1355720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.141284448s)
	I1027 23:24:38.748703 1355720 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1027 23:24:36.064714 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	W1027 23:24:38.069288 1357280 pod_ready.go:104] pod "coredns-5dd5756b68-zmrh9" is not "Ready", error: <nil>
	I1027 23:24:38.751660 1355720 addons.go:514] duration metric: took 1.666232664s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 23:24:38.809427 1355720 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-947754" context rescaled to 1 replicas
	W1027 23:24:40.307394 1355720 node_ready.go:57] node "no-preload-947754" has "Ready":"False" status (will retry)
	I1027 23:24:39.566046 1357280 pod_ready.go:94] pod "coredns-5dd5756b68-zmrh9" is "Ready"
	I1027 23:24:39.566079 1357280 pod_ready.go:86] duration metric: took 28.508476341s for pod "coredns-5dd5756b68-zmrh9" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:39.570068 1357280 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:39.579481 1357280 pod_ready.go:94] pod "etcd-old-k8s-version-477179" is "Ready"
	I1027 23:24:39.579508 1357280 pod_ready.go:86] duration metric: took 9.413631ms for pod "etcd-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:39.584766 1357280 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:39.607206 1357280 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-477179" is "Ready"
	I1027 23:24:39.607237 1357280 pod_ready.go:86] duration metric: took 22.438068ms for pod "kube-apiserver-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:39.617005 1357280 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:39.764475 1357280 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-477179" is "Ready"
	I1027 23:24:39.764503 1357280 pod_ready.go:86] duration metric: took 147.470144ms for pod "kube-controller-manager-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:39.964654 1357280 pod_ready.go:83] waiting for pod "kube-proxy-t6hvl" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:40.361910 1357280 pod_ready.go:94] pod "kube-proxy-t6hvl" is "Ready"
	I1027 23:24:40.361937 1357280 pod_ready.go:86] duration metric: took 397.250789ms for pod "kube-proxy-t6hvl" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:40.563045 1357280 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:40.961669 1357280 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-477179" is "Ready"
	I1027 23:24:40.961744 1357280 pod_ready.go:86] duration metric: took 398.672381ms for pod "kube-scheduler-old-k8s-version-477179" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:40.961771 1357280 pod_ready.go:40] duration metric: took 29.911007605s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:24:41.052570 1357280 start.go:626] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1027 23:24:41.056036 1357280 out.go:203] 
	W1027 23:24:41.059137 1357280 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1027 23:24:41.062116 1357280 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1027 23:24:41.065014 1357280 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-477179" cluster and "default" namespace by default
	W1027 23:24:42.806450 1355720 node_ready.go:57] node "no-preload-947754" has "Ready":"False" status (will retry)
	W1027 23:24:45.311181 1355720 node_ready.go:57] node "no-preload-947754" has "Ready":"False" status (will retry)
	W1027 23:24:47.807211 1355720 node_ready.go:57] node "no-preload-947754" has "Ready":"False" status (will retry)
	W1027 23:24:50.306666 1355720 node_ready.go:57] node "no-preload-947754" has "Ready":"False" status (will retry)
	I1027 23:24:51.807192 1355720 node_ready.go:49] node "no-preload-947754" is "Ready"
	I1027 23:24:51.807221 1355720 node_ready.go:38] duration metric: took 13.503716834s for node "no-preload-947754" to be "Ready" ...
	I1027 23:24:51.807235 1355720 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:24:51.807298 1355720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:24:51.822552 1355720 api_server.go:72] duration metric: took 14.737576268s to wait for apiserver process to appear ...
	I1027 23:24:51.822582 1355720 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:24:51.822602 1355720 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 23:24:51.831354 1355720 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 23:24:51.832607 1355720 api_server.go:141] control plane version: v1.34.1
	I1027 23:24:51.832630 1355720 api_server.go:131] duration metric: took 10.041045ms to wait for apiserver health ...
	I1027 23:24:51.832639 1355720 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:24:51.838485 1355720 system_pods.go:59] 8 kube-system pods found
	I1027 23:24:51.838580 1355720 system_pods.go:61] "coredns-66bc5c9577-mzm5d" [7af0a1a1-b33d-4152-ac15-91c2455b2d4c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:24:51.838605 1355720 system_pods.go:61] "etcd-no-preload-947754" [2be2c2d6-87dd-46e1-bc61-0b07f2a00a01] Running
	I1027 23:24:51.838639 1355720 system_pods.go:61] "kindnet-m7l4b" [baea7a6f-5608-4c48-bd36-abcd541e2d3b] Running
	I1027 23:24:51.838674 1355720 system_pods.go:61] "kube-apiserver-no-preload-947754" [19186a0e-373f-47f0-8e69-26a83b51bf39] Running
	I1027 23:24:51.838696 1355720 system_pods.go:61] "kube-controller-manager-no-preload-947754" [57f740fa-db37-4cbe-a187-a442c308ecc2] Running
	I1027 23:24:51.838725 1355720 system_pods.go:61] "kube-proxy-29878" [affca46b-bf6e-4821-a5e4-d7082cacdc04] Running
	I1027 23:24:51.838745 1355720 system_pods.go:61] "kube-scheduler-no-preload-947754" [62236697-12d4-40a2-b609-4cec58ee0277] Running
	I1027 23:24:51.838777 1355720 system_pods.go:61] "storage-provisioner" [7d8c57e3-c8ca-4466-9c32-fb227a39b7c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:24:51.838808 1355720 system_pods.go:74] duration metric: took 6.161876ms to wait for pod list to return data ...
	I1027 23:24:51.838835 1355720 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:24:51.844992 1355720 default_sa.go:45] found service account: "default"
	I1027 23:24:51.845015 1355720 default_sa.go:55] duration metric: took 6.160817ms for default service account to be created ...
	I1027 23:24:51.845025 1355720 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:24:51.850351 1355720 system_pods.go:86] 8 kube-system pods found
	I1027 23:24:51.850409 1355720 system_pods.go:89] "coredns-66bc5c9577-mzm5d" [7af0a1a1-b33d-4152-ac15-91c2455b2d4c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:24:51.850416 1355720 system_pods.go:89] "etcd-no-preload-947754" [2be2c2d6-87dd-46e1-bc61-0b07f2a00a01] Running
	I1027 23:24:51.850422 1355720 system_pods.go:89] "kindnet-m7l4b" [baea7a6f-5608-4c48-bd36-abcd541e2d3b] Running
	I1027 23:24:51.850427 1355720 system_pods.go:89] "kube-apiserver-no-preload-947754" [19186a0e-373f-47f0-8e69-26a83b51bf39] Running
	I1027 23:24:51.850435 1355720 system_pods.go:89] "kube-controller-manager-no-preload-947754" [57f740fa-db37-4cbe-a187-a442c308ecc2] Running
	I1027 23:24:51.850439 1355720 system_pods.go:89] "kube-proxy-29878" [affca46b-bf6e-4821-a5e4-d7082cacdc04] Running
	I1027 23:24:51.850443 1355720 system_pods.go:89] "kube-scheduler-no-preload-947754" [62236697-12d4-40a2-b609-4cec58ee0277] Running
	I1027 23:24:51.850449 1355720 system_pods.go:89] "storage-provisioner" [7d8c57e3-c8ca-4466-9c32-fb227a39b7c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:24:51.850470 1355720 retry.go:31] will retry after 197.101245ms: missing components: kube-dns
	I1027 23:24:52.052294 1355720 system_pods.go:86] 8 kube-system pods found
	I1027 23:24:52.052382 1355720 system_pods.go:89] "coredns-66bc5c9577-mzm5d" [7af0a1a1-b33d-4152-ac15-91c2455b2d4c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:24:52.052411 1355720 system_pods.go:89] "etcd-no-preload-947754" [2be2c2d6-87dd-46e1-bc61-0b07f2a00a01] Running
	I1027 23:24:52.052433 1355720 system_pods.go:89] "kindnet-m7l4b" [baea7a6f-5608-4c48-bd36-abcd541e2d3b] Running
	I1027 23:24:52.052462 1355720 system_pods.go:89] "kube-apiserver-no-preload-947754" [19186a0e-373f-47f0-8e69-26a83b51bf39] Running
	I1027 23:24:52.052496 1355720 system_pods.go:89] "kube-controller-manager-no-preload-947754" [57f740fa-db37-4cbe-a187-a442c308ecc2] Running
	I1027 23:24:52.052529 1355720 system_pods.go:89] "kube-proxy-29878" [affca46b-bf6e-4821-a5e4-d7082cacdc04] Running
	I1027 23:24:52.052548 1355720 system_pods.go:89] "kube-scheduler-no-preload-947754" [62236697-12d4-40a2-b609-4cec58ee0277] Running
	I1027 23:24:52.052568 1355720 system_pods.go:89] "storage-provisioner" [7d8c57e3-c8ca-4466-9c32-fb227a39b7c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:24:52.052619 1355720 retry.go:31] will retry after 379.464834ms: missing components: kube-dns
	I1027 23:24:52.436838 1355720 system_pods.go:86] 8 kube-system pods found
	I1027 23:24:52.436873 1355720 system_pods.go:89] "coredns-66bc5c9577-mzm5d" [7af0a1a1-b33d-4152-ac15-91c2455b2d4c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:24:52.436881 1355720 system_pods.go:89] "etcd-no-preload-947754" [2be2c2d6-87dd-46e1-bc61-0b07f2a00a01] Running
	I1027 23:24:52.436887 1355720 system_pods.go:89] "kindnet-m7l4b" [baea7a6f-5608-4c48-bd36-abcd541e2d3b] Running
	I1027 23:24:52.436891 1355720 system_pods.go:89] "kube-apiserver-no-preload-947754" [19186a0e-373f-47f0-8e69-26a83b51bf39] Running
	I1027 23:24:52.436895 1355720 system_pods.go:89] "kube-controller-manager-no-preload-947754" [57f740fa-db37-4cbe-a187-a442c308ecc2] Running
	I1027 23:24:52.436899 1355720 system_pods.go:89] "kube-proxy-29878" [affca46b-bf6e-4821-a5e4-d7082cacdc04] Running
	I1027 23:24:52.436907 1355720 system_pods.go:89] "kube-scheduler-no-preload-947754" [62236697-12d4-40a2-b609-4cec58ee0277] Running
	I1027 23:24:52.436911 1355720 system_pods.go:89] "storage-provisioner" [7d8c57e3-c8ca-4466-9c32-fb227a39b7c5] Running
	I1027 23:24:52.436919 1355720 system_pods.go:126] duration metric: took 591.88821ms to wait for k8s-apps to be running ...
	I1027 23:24:52.436927 1355720 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:24:52.436982 1355720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:24:52.461262 1355720 system_svc.go:56] duration metric: took 24.323971ms WaitForService to wait for kubelet
	I1027 23:24:52.461359 1355720 kubeadm.go:587] duration metric: took 15.376371831s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:24:52.461397 1355720 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:24:52.465404 1355720 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:24:52.465434 1355720 node_conditions.go:123] node cpu capacity is 2
	I1027 23:24:52.465445 1355720 node_conditions.go:105] duration metric: took 4.027593ms to run NodePressure ...
	I1027 23:24:52.465458 1355720 start.go:242] waiting for startup goroutines ...
	I1027 23:24:52.465465 1355720 start.go:247] waiting for cluster config update ...
	I1027 23:24:52.465476 1355720 start.go:256] writing updated cluster config ...
	I1027 23:24:52.465763 1355720 ssh_runner.go:195] Run: rm -f paused
	I1027 23:24:52.471293 1355720 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:24:52.488064 1355720 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mzm5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:53.494269 1355720 pod_ready.go:94] pod "coredns-66bc5c9577-mzm5d" is "Ready"
	I1027 23:24:53.494295 1355720 pod_ready.go:86] duration metric: took 1.006194819s for pod "coredns-66bc5c9577-mzm5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:53.497234 1355720 pod_ready.go:83] waiting for pod "etcd-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:53.503606 1355720 pod_ready.go:94] pod "etcd-no-preload-947754" is "Ready"
	I1027 23:24:53.503633 1355720 pod_ready.go:86] duration metric: took 6.368155ms for pod "etcd-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:53.506995 1355720 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:53.514313 1355720 pod_ready.go:94] pod "kube-apiserver-no-preload-947754" is "Ready"
	I1027 23:24:53.514343 1355720 pod_ready.go:86] duration metric: took 7.324353ms for pod "kube-apiserver-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:53.517536 1355720 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:53.691896 1355720 pod_ready.go:94] pod "kube-controller-manager-no-preload-947754" is "Ready"
	I1027 23:24:53.691925 1355720 pod_ready.go:86] duration metric: took 174.356447ms for pod "kube-controller-manager-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:53.893112 1355720 pod_ready.go:83] waiting for pod "kube-proxy-29878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:54.292015 1355720 pod_ready.go:94] pod "kube-proxy-29878" is "Ready"
	I1027 23:24:54.292043 1355720 pod_ready.go:86] duration metric: took 398.88697ms for pod "kube-proxy-29878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:54.491334 1355720 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:54.892039 1355720 pod_ready.go:94] pod "kube-scheduler-no-preload-947754" is "Ready"
	I1027 23:24:54.892073 1355720 pod_ready.go:86] duration metric: took 400.705235ms for pod "kube-scheduler-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:24:54.892086 1355720 pod_ready.go:40] duration metric: took 2.420709593s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:24:55.020810 1355720 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:24:55.026273 1355720 out.go:179] * Done! kubectl is now configured to use "no-preload-947754" cluster and "default" namespace by default
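
The cluster state logged above can be spot-checked from the host. A minimal sketch, assuming minikube wrote the "no-preload-947754" context into the default kubeconfig as the final log line reports:

  # List the node and the kube-system pods the wait loop above was polling.
  kubectl --context no-preload-947754 get nodes -o wide
  kubectl --context no-preload-947754 -n kube-system get pods -o wide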
	
	
	==> CRI-O <==
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.552682784Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=72907c78-7598-44f5-8cb6-2f4c52dd3df6 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.554367246Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2ea2422e-85ae-4019-94e1-b3f4c907d017 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.555423294Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x/dashboard-metrics-scraper" id=f3250598-8ba6-4bd3-8f28-77dd7e5681e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.555559181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.563001969Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.563664576Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.581844188Z" level=info msg="Created container 09ab5a46773af9e2116c4944c8fbce13ecce96bc929057f176567b4da1e3a386: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x/dashboard-metrics-scraper" id=f3250598-8ba6-4bd3-8f28-77dd7e5681e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.583220364Z" level=info msg="Starting container: 09ab5a46773af9e2116c4944c8fbce13ecce96bc929057f176567b4da1e3a386" id=5b7606a7-696e-4d3a-92d9-4c288ec398f6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.586904428Z" level=info msg="Started container" PID=1665 containerID=09ab5a46773af9e2116c4944c8fbce13ecce96bc929057f176567b4da1e3a386 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x/dashboard-metrics-scraper id=5b7606a7-696e-4d3a-92d9-4c288ec398f6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=975020566fbb0232a926eaad8a9e870fa3d83321555aadc418e0e306c41d5cfd
	Oct 27 23:24:44 old-k8s-version-477179 conmon[1663]: conmon 09ab5a46773af9e2116c <ninfo>: container 1665 exited with status 1
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.941765342Z" level=info msg="Removing container: cd2d1065a5bf781083ef9f3266746e55788736e6bf5341d66216f56b3203be84" id=d19c2299-8130-4e02-8139-b0fca3f4e3de name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.951686097Z" level=info msg="Error loading conmon cgroup of container cd2d1065a5bf781083ef9f3266746e55788736e6bf5341d66216f56b3203be84: cgroup deleted" id=d19c2299-8130-4e02-8139-b0fca3f4e3de name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 23:24:44 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:44.95750643Z" level=info msg="Removed container cd2d1065a5bf781083ef9f3266746e55788736e6bf5341d66216f56b3203be84: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x/dashboard-metrics-scraper" id=d19c2299-8130-4e02-8139-b0fca3f4e3de name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.874672988Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.880015964Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.880049893Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.880073327Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.883264002Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.883301844Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.883325951Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.887586057Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.887619148Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.887644585Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.891185656Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:24:47 old-k8s-version-477179 crio[650]: time="2025-10-27T23:24:47.89121371Z" level=info msg="Updated default CNI network name to kindnet"
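
The same CRI-O state can be inspected by hand with crictl inside the node; a sketch, assuming the minikube node image ships crictl (the container ID is the exited dashboard-metrics-scraper shown in the status table below):

  minikube ssh -p old-k8s-version-477179 -- sudo crictl ps -a
  # Tail the container that exited with status 1 in the events above.
  minikube ssh -p old-k8s-version-477179 -- sudo crictl logs 09ab5a46773af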
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	09ab5a46773af       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago       Exited              dashboard-metrics-scraper   2                   975020566fbb0       dashboard-metrics-scraper-5f989dc9cf-7248x       kubernetes-dashboard
	9cda4094bfed5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago       Running             storage-provisioner         2                   15283197f0e51       storage-provisioner                              kube-system
	76f54d3dbd7fd       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   23 seconds ago       Running             kubernetes-dashboard        0                   04a5eb8aafba2       kubernetes-dashboard-8694d4445c-hnmb4            kubernetes-dashboard
	266c1e8038479       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           51 seconds ago       Running             kube-proxy                  1                   be3041f022c27       kube-proxy-t6hvl                                 kube-system
	08a2078427d64       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago       Running             busybox                     1                   77d3da93270f4       busybox                                          default
	8dd45d72c4796       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           51 seconds ago       Running             coredns                     1                   1568be3a37133       coredns-5dd5756b68-zmrh9                         kube-system
	f6678a4bfdea0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago       Running             kindnet-cni                 1                   d37a5b86521a8       kindnet-z26d6                                    kube-system
	2aab2984cba3a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago       Exited              storage-provisioner         1                   15283197f0e51       storage-provisioner                              kube-system
	31d2036be45f7       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   a5cd0a5f75890       kube-apiserver-old-k8s-version-477179            kube-system
	4cc4ea0f92239       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   8c87e0807307c       etcd-old-k8s-version-477179                      kube-system
	4df94ad74d55d       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   fec393af28f76       kube-controller-manager-old-k8s-version-477179   kube-system
	0daf78b0c28b9       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   a2201c254522e       kube-scheduler-old-k8s-version-477179            kube-system
	
	
	==> coredns [8dd45d72c479651ba09d2be7f8a62f2c5eb7ccd81bf397242248fd631ff5c1e2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34565 - 4094 "HINFO IN 7565624524836270135.3906192045454744344. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016630036s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
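
The "Still waiting on: kubernetes" lines are CoreDNS's readiness plugin blocking until its API watches sync. Once it settles, in-cluster resolution can be probed with the usual busybox one-liner (a sketch; busybox:1.28 is chosen because its nslookup behaves reliably):

  kubectl --context old-k8s-version-477179 run dns-probe --image=busybox:1.28 \
    --restart=Never -it --rm -- nslookup kubernetes.default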
	
	
	==> describe nodes <==
	Name:               old-k8s-version-477179
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-477179
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=old-k8s-version-477179
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T23_22_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 23:22:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-477179
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 23:24:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 23:24:27 +0000   Mon, 27 Oct 2025 23:22:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 23:24:27 +0000   Mon, 27 Oct 2025 23:22:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 23:24:27 +0000   Mon, 27 Oct 2025 23:22:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 23:24:27 +0000   Mon, 27 Oct 2025 23:23:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-477179
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                c71561b3-c618-4514-9439-9c8988ccb8a0
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-zmrh9                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     113s
	  kube-system                 etcd-old-k8s-version-477179                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m5s
	  kube-system                 kindnet-z26d6                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-old-k8s-version-477179             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-old-k8s-version-477179    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-t6hvl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-old-k8s-version-477179             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-7248x        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-hnmb4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 112s                   kube-proxy       
	  Normal  Starting                 50s                    kube-proxy       
	  Normal  Starting                 2m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-477179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-477179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-477179 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m6s                   kubelet          Node old-k8s-version-477179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m6s                   kubelet          Node old-k8s-version-477179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m6s                   kubelet          Node old-k8s-version-477179 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s                   node-controller  Node old-k8s-version-477179 event: Registered Node old-k8s-version-477179 in Controller
	  Normal  NodeReady                99s                    kubelet          Node old-k8s-version-477179 status is now: NodeReady
	  Normal  Starting                 61s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node old-k8s-version-477179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node old-k8s-version-477179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node old-k8s-version-477179 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                    node-controller  Node old-k8s-version-477179 event: Registered Node old-k8s-version-477179 in Controller
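
The table above is rendered from the node object itself; it can be regenerated, or just the condition summary pulled, against the same context (a sketch):

  kubectl --context old-k8s-version-477179 describe node old-k8s-version-477179
  # Condition summary only:
  kubectl --context old-k8s-version-477179 get node old-k8s-version-477179 \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'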
	
	
	==> dmesg <==
	[  +1.719322] overlayfs: idmapped layers are currently not supported
	[Oct27 23:00] overlayfs: idmapped layers are currently not supported
	[Oct27 23:01] overlayfs: idmapped layers are currently not supported
	[ +42.515610] overlayfs: idmapped layers are currently not supported
	[Oct27 23:02] overlayfs: idmapped layers are currently not supported
	[Oct27 23:03] overlayfs: idmapped layers are currently not supported
	[Oct27 23:04] overlayfs: idmapped layers are currently not supported
	[Oct27 23:06] overlayfs: idmapped layers are currently not supported
	[  +3.129054] overlayfs: idmapped layers are currently not supported
	[Oct27 23:08] overlayfs: idmapped layers are currently not supported
	[Oct27 23:09] overlayfs: idmapped layers are currently not supported
	[  +0.696324] overlayfs: idmapped layers are currently not supported
	[ +42.065460] overlayfs: idmapped layers are currently not supported
	[Oct27 23:10] overlayfs: idmapped layers are currently not supported
	[ +23.722860] overlayfs: idmapped layers are currently not supported
	[Oct27 23:16] overlayfs: idmapped layers are currently not supported
	[Oct27 23:17] overlayfs: idmapped layers are currently not supported
	[Oct27 23:18] overlayfs: idmapped layers are currently not supported
	[Oct27 23:19] overlayfs: idmapped layers are currently not supported
	[Oct27 23:20] overlayfs: idmapped layers are currently not supported
	[Oct27 23:21] overlayfs: idmapped layers are currently not supported
	[Oct27 23:22] overlayfs: idmapped layers are currently not supported
	[ +34.590925] overlayfs: idmapped layers are currently not supported
	[Oct27 23:23] overlayfs: idmapped layers are currently not supported
	[  +6.906011] overlayfs: idmapped layers are currently not supported
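
The repeated overlayfs lines are kernel warnings emitted as container mounts are created on this 5.15 kernel, which lacks idmapped-layer support; they are generally benign noise rather than failures. To filter them with readable timestamps (a sketch):

  minikube ssh -p old-k8s-version-477179 -- sudo dmesg -T | grep -i overlayfs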
	
	
	==> etcd [4cc4ea0f92239fc9155b151efab480bb22dbf8b3551f7c315daae1493853f27f] <==
	{"level":"info","ts":"2025-10-27T23:23:59.034819Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-27T23:23:59.034864Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-27T23:23:59.035176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-27T23:23:59.035291Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-27T23:23:59.035448Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T23:23:59.035505Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T23:23:59.066305Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-27T23:23:59.082968Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-27T23:23:59.112141Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-27T23:23:59.081669Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-27T23:23:59.112247Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-27T23:23:59.926254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-27T23:23:59.926358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-27T23:23:59.926722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-27T23:23:59.926787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-27T23:23:59.926821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-27T23:23:59.926871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-27T23:23:59.926919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-27T23:23:59.93328Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-477179 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-27T23:23:59.933452Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T23:23:59.934512Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-27T23:23:59.934558Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T23:23:59.935359Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-27T23:23:59.960799Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-27T23:23:59.960891Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 23:24:58 up  6:07,  0 user,  load average: 3.56, 3.61, 3.13
	Linux old-k8s-version-477179 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f6678a4bfdea01a536baa38f2f64d3a12a42d128714d4a3edd59407299000596] <==
	I1027 23:24:07.640847       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:24:07.651060       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 23:24:07.651410       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:24:07.651425       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:24:07.651557       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:24:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:24:07.915836       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:24:07.915954       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:24:07.917090       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:24:07.917294       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 23:24:37.916073       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 23:24:37.927193       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 23:24:37.927298       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 23:24:37.927420       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1027 23:24:39.418045       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 23:24:39.418141       1 metrics.go:72] Registering metrics
	I1027 23:24:39.418262       1 controller.go:711] "Syncing nftables rules"
	I1027 23:24:47.874327       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 23:24:47.874398       1 main.go:301] handling current node
	I1027 23:24:57.875722       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 23:24:57.875752       1 main.go:301] handling current node
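
The "Failed to watch ... i/o timeout" burst is consistent with the apiserver restart window earlier in the log; the watches recover and caches re-sync at 23:24:39. The CNI config kindnet wrote (see the CRI-O monitoring events above) and its pods can be checked with (a sketch; the app=kindnet label is assumed from the manifest minikube applies):

  minikube ssh -p old-k8s-version-477179 -- cat /etc/cni/net.d/10-kindnet.conflist
  kubectl --context old-k8s-version-477179 -n kube-system get pods -l app=kindnet -o wide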
	
	
	==> kube-apiserver [31d2036be45f7a86c828442bcf45019e9bddf4f8b4f0001aa49eaad623860144] <==
	I1027 23:24:06.676875       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1027 23:24:06.678134       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1027 23:24:06.681833       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1027 23:24:06.681859       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1027 23:24:06.683931       1 aggregator.go:166] initial CRD sync complete...
	I1027 23:24:06.683966       1 autoregister_controller.go:141] Starting autoregister controller
	I1027 23:24:06.683973       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 23:24:06.683985       1 cache.go:39] Caches are synced for autoregister controller
	I1027 23:24:06.930156       1 trace.go:236] Trace[380303693]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:811577d7-4c42-451b-a3d9-a1a89005eef5,client:192.168.85.2,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (27-Oct-2025 23:24:06.405) (total time: 524ms):
	Trace[380303693]: ---"Write to database call failed" len:4139,err:nodes "old-k8s-version-477179" already exists 94ms (23:24:06.930)
	Trace[380303693]: [524.229405ms] [524.229405ms] END
	I1027 23:24:06.943140       1 trace.go:236] Trace[1142693778]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ff83b1b8-36b6-4c16-89e0-68d941e611a3,client:192.168.85.2,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (27-Oct-2025 23:24:06.380) (total time: 562ms):
	Trace[1142693778]: [562.927011ms] [562.927011ms] END
	I1027 23:24:07.030421       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1027 23:24:07.045458       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 23:24:10.758608       1 controller.go:624] quota admission added evaluator for: namespaces
	I1027 23:24:10.828737       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1027 23:24:10.857429       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 23:24:10.872044       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 23:24:10.882978       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1027 23:24:10.942398       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.73.173"}
	I1027 23:24:10.966524       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.35.79"}
	I1027 23:24:20.389660       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 23:24:20.426834       1 controller.go:624] quota admission added evaluator for: endpoints
	I1027 23:24:20.431618       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4df94ad74d55d5841a5ebd671ae3a091cbc30efa3d08697d8baed42fd415cbf1] <==
	I1027 23:24:20.479000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.765µs"
	I1027 23:24:20.494340       1 shared_informer.go:318] Caches are synced for resource quota
	I1027 23:24:20.510491       1 shared_informer.go:318] Caches are synced for stateful set
	I1027 23:24:20.515158       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-7248x"
	I1027 23:24:20.532906       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-hnmb4"
	I1027 23:24:20.558442       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.66317ms"
	I1027 23:24:20.575571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="112.71041ms"
	I1027 23:24:20.588710       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="30.139708ms"
	I1027 23:24:20.588854       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="42.824µs"
	I1027 23:24:20.646333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="69.58189ms"
	I1027 23:24:20.649238       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="72.108µs"
	I1027 23:24:20.649438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.254µs"
	I1027 23:24:20.684949       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="85.228µs"
	I1027 23:24:20.835517       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 23:24:20.910941       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 23:24:20.910971       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1027 23:24:28.909720       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.109µs"
	I1027 23:24:29.919838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.095µs"
	I1027 23:24:30.958528       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="139.998µs"
	I1027 23:24:34.968024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="34.419358ms"
	I1027 23:24:34.968201       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="97.996µs"
	I1027 23:24:39.147489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.994244ms"
	I1027 23:24:39.147823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="137.331µs"
	I1027 23:24:44.960722       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.705µs"
	I1027 23:24:50.867545       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.949µs"
	
	
	==> kube-proxy [266c1e8038479147b3192edbb4966e537d86784dad76d9a4aa532c21689fc44c] <==
	I1027 23:24:08.457178       1 server_others.go:69] "Using iptables proxy"
	I1027 23:24:08.540892       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1027 23:24:08.603532       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:24:08.605450       1 server_others.go:152] "Using iptables Proxier"
	I1027 23:24:08.605537       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1027 23:24:08.605570       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1027 23:24:08.605627       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1027 23:24:08.605864       1 server.go:846] "Version info" version="v1.28.0"
	I1027 23:24:08.606220       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:24:08.622222       1 config.go:188] "Starting service config controller"
	I1027 23:24:08.622320       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1027 23:24:08.622364       1 config.go:97] "Starting endpoint slice config controller"
	I1027 23:24:08.622460       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1027 23:24:08.626073       1 config.go:315] "Starting node config controller"
	I1027 23:24:08.626155       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1027 23:24:08.723363       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1027 23:24:08.723410       1 shared_informer.go:318] Caches are synced for service config
	I1027 23:24:08.728777       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [0daf78b0c28b92f6f69bc82b09d8267753a05593afe602cb3abe6fd2fe226dd4] <==
	I1027 23:24:01.093399       1 serving.go:348] Generated self-signed cert in-memory
	W1027 23:24:06.417479       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 23:24:06.417586       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 23:24:06.417620       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 23:24:06.417686       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 23:24:06.636641       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1027 23:24:06.638205       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:24:06.640409       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1027 23:24:06.640576       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:24:06.640620       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1027 23:24:06.640667       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1027 23:24:06.744564       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 27 23:24:09 old-k8s-version-477179 kubelet[775]: I1027 23:24:09.097430     775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 23:24:20 old-k8s-version-477179 kubelet[775]: I1027 23:24:20.546519     775 topology_manager.go:215] "Topology Admit Handler" podUID="d7eada63-c5a5-4c7b-85da-87f01144acad" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-7248x"
	Oct 27 23:24:20 old-k8s-version-477179 kubelet[775]: I1027 23:24:20.579037     775 topology_manager.go:215] "Topology Admit Handler" podUID="9af278b5-b4c3-4acf-a098-ffd7b10c75e5" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-hnmb4"
	Oct 27 23:24:20 old-k8s-version-477179 kubelet[775]: I1027 23:24:20.722740     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwhm8\" (UniqueName: \"kubernetes.io/projected/d7eada63-c5a5-4c7b-85da-87f01144acad-kube-api-access-wwhm8\") pod \"dashboard-metrics-scraper-5f989dc9cf-7248x\" (UID: \"d7eada63-c5a5-4c7b-85da-87f01144acad\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x"
	Oct 27 23:24:20 old-k8s-version-477179 kubelet[775]: I1027 23:24:20.722975     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-777qz\" (UniqueName: \"kubernetes.io/projected/9af278b5-b4c3-4acf-a098-ffd7b10c75e5-kube-api-access-777qz\") pod \"kubernetes-dashboard-8694d4445c-hnmb4\" (UID: \"9af278b5-b4c3-4acf-a098-ffd7b10c75e5\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-hnmb4"
	Oct 27 23:24:20 old-k8s-version-477179 kubelet[775]: I1027 23:24:20.723088     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9af278b5-b4c3-4acf-a098-ffd7b10c75e5-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-hnmb4\" (UID: \"9af278b5-b4c3-4acf-a098-ffd7b10c75e5\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-hnmb4"
	Oct 27 23:24:20 old-k8s-version-477179 kubelet[775]: I1027 23:24:20.723200     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d7eada63-c5a5-4c7b-85da-87f01144acad-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-7248x\" (UID: \"d7eada63-c5a5-4c7b-85da-87f01144acad\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x"
	Oct 27 23:24:20 old-k8s-version-477179 kubelet[775]: W1027 23:24:20.878676     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/431f1160e1d33bff6cddecce49db6c44fb765c51ef5962fd5038c980e7f31373/crio-975020566fbb0232a926eaad8a9e870fa3d83321555aadc418e0e306c41d5cfd WatchSource:0}: Error finding container 975020566fbb0232a926eaad8a9e870fa3d83321555aadc418e0e306c41d5cfd: Status 404 returned error can't find the container with id 975020566fbb0232a926eaad8a9e870fa3d83321555aadc418e0e306c41d5cfd
	Oct 27 23:24:28 old-k8s-version-477179 kubelet[775]: I1027 23:24:28.885646     775 scope.go:117] "RemoveContainer" containerID="c7c0eda28b5e0bd516731e19c372b7cbbefc18494146c5179c2fd902e0c632bf"
	Oct 27 23:24:29 old-k8s-version-477179 kubelet[775]: I1027 23:24:29.893089     775 scope.go:117] "RemoveContainer" containerID="c7c0eda28b5e0bd516731e19c372b7cbbefc18494146c5179c2fd902e0c632bf"
	Oct 27 23:24:29 old-k8s-version-477179 kubelet[775]: I1027 23:24:29.893439     775 scope.go:117] "RemoveContainer" containerID="cd2d1065a5bf781083ef9f3266746e55788736e6bf5341d66216f56b3203be84"
	Oct 27 23:24:29 old-k8s-version-477179 kubelet[775]: E1027 23:24:29.893789     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7248x_kubernetes-dashboard(d7eada63-c5a5-4c7b-85da-87f01144acad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x" podUID="d7eada63-c5a5-4c7b-85da-87f01144acad"
	Oct 27 23:24:30 old-k8s-version-477179 kubelet[775]: I1027 23:24:30.896888     775 scope.go:117] "RemoveContainer" containerID="cd2d1065a5bf781083ef9f3266746e55788736e6bf5341d66216f56b3203be84"
	Oct 27 23:24:30 old-k8s-version-477179 kubelet[775]: E1027 23:24:30.897340     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7248x_kubernetes-dashboard(d7eada63-c5a5-4c7b-85da-87f01144acad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x" podUID="d7eada63-c5a5-4c7b-85da-87f01144acad"
	Oct 27 23:24:37 old-k8s-version-477179 kubelet[775]: I1027 23:24:37.920583     775 scope.go:117] "RemoveContainer" containerID="2aab2984cba3a6ac659a5293f3fc709521e8bf4e3e62a456804c373f3774d3f5"
	Oct 27 23:24:37 old-k8s-version-477179 kubelet[775]: I1027 23:24:37.977637     775 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-hnmb4" podStartSLOduration=4.230417368 podCreationTimestamp="2025-10-27 23:24:20 +0000 UTC" firstStartedPulling="2025-10-27 23:24:20.931343835 +0000 UTC m=+23.665230629" lastFinishedPulling="2025-10-27 23:24:34.678511143 +0000 UTC m=+37.412397945" observedRunningTime="2025-10-27 23:24:34.933157481 +0000 UTC m=+37.667044283" watchObservedRunningTime="2025-10-27 23:24:37.977584684 +0000 UTC m=+40.711471478"
	Oct 27 23:24:44 old-k8s-version-477179 kubelet[775]: I1027 23:24:44.551984     775 scope.go:117] "RemoveContainer" containerID="cd2d1065a5bf781083ef9f3266746e55788736e6bf5341d66216f56b3203be84"
	Oct 27 23:24:44 old-k8s-version-477179 kubelet[775]: I1027 23:24:44.940387     775 scope.go:117] "RemoveContainer" containerID="cd2d1065a5bf781083ef9f3266746e55788736e6bf5341d66216f56b3203be84"
	Oct 27 23:24:44 old-k8s-version-477179 kubelet[775]: I1027 23:24:44.940623     775 scope.go:117] "RemoveContainer" containerID="09ab5a46773af9e2116c4944c8fbce13ecce96bc929057f176567b4da1e3a386"
	Oct 27 23:24:44 old-k8s-version-477179 kubelet[775]: E1027 23:24:44.940949     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7248x_kubernetes-dashboard(d7eada63-c5a5-4c7b-85da-87f01144acad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x" podUID="d7eada63-c5a5-4c7b-85da-87f01144acad"
	Oct 27 23:24:50 old-k8s-version-477179 kubelet[775]: I1027 23:24:50.850042     775 scope.go:117] "RemoveContainer" containerID="09ab5a46773af9e2116c4944c8fbce13ecce96bc929057f176567b4da1e3a386"
	Oct 27 23:24:50 old-k8s-version-477179 kubelet[775]: E1027 23:24:50.850945     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7248x_kubernetes-dashboard(d7eada63-c5a5-4c7b-85da-87f01144acad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7248x" podUID="d7eada63-c5a5-4c7b-85da-87f01144acad"
	Oct 27 23:24:53 old-k8s-version-477179 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 23:24:53 old-k8s-version-477179 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 23:24:53 old-k8s-version-477179 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [76f54d3dbd7fd7c913b3758a5fcab315050789c5914aa4cdea07154989d5e5c1] <==
	2025/10/27 23:24:34 Starting overwatch
	2025/10/27 23:24:34 Using namespace: kubernetes-dashboard
	2025/10/27 23:24:34 Using in-cluster config to connect to apiserver
	2025/10/27 23:24:34 Using secret token for csrf signing
	2025/10/27 23:24:34 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 23:24:34 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 23:24:34 Successful initial request to the apiserver, version: v1.28.0
	2025/10/27 23:24:34 Generating JWE encryption key
	2025/10/27 23:24:34 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 23:24:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 23:24:35 Initializing JWE encryption key from synchronized object
	2025/10/27 23:24:35 Creating in-cluster Sidecar client
	2025/10/27 23:24:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 23:24:35 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [2aab2984cba3a6ac659a5293f3fc709521e8bf4e3e62a456804c373f3774d3f5] <==
	I1027 23:24:07.610250       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 23:24:37.613524       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9cda4094bfed5a639c35f0a169fc39a8317d45025263f0528ba134c879485b25] <==
	I1027 23:24:38.044406       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 23:24:38.079720       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 23:24:38.082595       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1027 23:24:55.490624       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 23:24:55.491033       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"60ebd7b9-9b45-4373-8eb9-0ab942bf1b51", APIVersion:"v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-477179_3b29ac17-0d70-46cf-8990-79be41ea6022 became leader
	I1027 23:24:55.492599       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-477179_3b29ac17-0d70-46cf-8990-79be41ea6022!
	I1027 23:24:55.594073       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-477179_3b29ac17-0d70-46cf-8990-79be41ea6022!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-477179 -n old-k8s-version-477179
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-477179 -n old-k8s-version-477179: exit status 2 (389.229232ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
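Note: `--format={{.APIServer}}` prints only the apiserver field, which can read `Running` even while other components are paused or failing; a sketch of pulling the unfiltered status instead (assuming the profile still exists; the exact output fields vary by minikube version):

	out/minikube-linux-arm64 status -p old-k8s-version-477179 --output=json   # full per-component status as JSON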
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-477179 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.88s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-947754 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-947754 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (338.712309ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:25:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-947754 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
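The MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-state probe, which shells into the node and runs the quoted `sudo runc list -f json`. A minimal sketch of re-running that probe by hand against the same profile (the `/run/crun` path is an assumption about an alternate runtime state directory, not something the test checked):

	# Repeat the paused-state probe that the addon enable tripped over:
	out/minikube-linux-arm64 -p no-preload-947754 ssh -- sudo runc list -f json
	# runc exits 1 with "open /run/runc: no such file or directory" when its
	# state directory is absent; check which runtime state directory exists:
	out/minikube-linux-arm64 -p no-preload-947754 ssh -- ls -d /run/runc /run/crun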
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-947754 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-947754 describe deploy/metrics-server -n kube-system: exit status 1 (114.224275ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-947754 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
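The assertion compares the expected `fake.domain/registry.k8s.io/echoserver:1.4` string against the deployment description, which is empty here because the deployment was never created. A hedged one-liner for extracting just the container image from the spec, had the deployment existed:

	kubectl --context no-preload-947754 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'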
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-947754
E1027 23:25:03.984729 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/custom-flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:243: (dbg) docker inspect no-preload-947754:

-- stdout --
	[
	    {
	        "Id": "c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd",
	        "Created": "2025-10-27T23:23:41.900111117Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1356022,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T23:23:41.969016019Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd/hostname",
	        "HostsPath": "/var/lib/docker/containers/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd/hosts",
	        "LogPath": "/var/lib/docker/containers/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd-json.log",
	        "Name": "/no-preload-947754",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-947754:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-947754",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd",
	                "LowerDir": "/var/lib/docker/overlay2/6c5ee39391503335b6c35014a89cbd6eea86fe3f643e367e6da44c26ee368544-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c5ee39391503335b6c35014a89cbd6eea86fe3f643e367e6da44c26ee368544/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c5ee39391503335b6c35014a89cbd6eea86fe3f643e367e6da44c26ee368544/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c5ee39391503335b6c35014a89cbd6eea86fe3f643e367e6da44c26ee368544/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-947754",
	                "Source": "/var/lib/docker/volumes/no-preload-947754/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-947754",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-947754",
	                "name.minikube.sigs.k8s.io": "no-preload-947754",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d7a6e6a53170007890c211baea02d037685e1157e75774428975376e7562bcb1",
	            "SandboxKey": "/var/run/docker/netns/d7a6e6a53170",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34564"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34565"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34568"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34566"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34567"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-947754": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:7d:3a:3f:8c:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0cbf6a9d973fe230cfa5a9e9384a72057cae1f71fd4d9191f2ef370fd36289f9",
	                    "EndpointID": "ed1cd43426fc8023202a8d3ece966da7c75df8742dac432727e0adc5457d18a6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-947754",
	                        "c73891b58ca0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-947754 -n no-preload-947754
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-947754 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-947754 logs -n 25: (1.41509276s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-440075 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-477179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo containerd config dump                                                                                                                                                                                                  │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ stop    │ -p old-k8s-version-477179 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo crio config                                                                                                                                                                                                             │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ delete  │ -p bridge-440075                                                                                                                                                                                                                              │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ start   │ -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-947754      │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:24 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-477179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ start   │ -p old-k8s-version-477179 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:24 UTC │
	│ image   │ old-k8s-version-477179 image list --format=json                                                                                                                                                                                               │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │ 27 Oct 25 23:24 UTC │
	│ pause   │ -p old-k8s-version-477179 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │                     │
	│ delete  │ -p old-k8s-version-477179                                                                                                                                                                                                                     │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │ 27 Oct 25 23:25 UTC │
	│ delete  │ -p old-k8s-version-477179                                                                                                                                                                                                                     │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ start   │ -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790322     │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-947754 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-947754      │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 23:25:03
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 23:25:03.251043 1362600 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:25:03.251167 1362600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:25:03.251179 1362600 out.go:374] Setting ErrFile to fd 2...
	I1027 23:25:03.251185 1362600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:25:03.251439 1362600 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:25:03.251943 1362600 out.go:368] Setting JSON to false
	I1027 23:25:03.252876 1362600 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22053,"bootTime":1761585451,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 23:25:03.252950 1362600 start.go:143] virtualization:  
	I1027 23:25:03.256918 1362600 out.go:179] * [embed-certs-790322] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 23:25:03.260132 1362600 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:25:03.260192 1362600 notify.go:221] Checking for updates...
	I1027 23:25:03.266302 1362600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:25:03.269374 1362600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:25:03.272436 1362600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 23:25:03.275442 1362600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 23:25:03.278458 1362600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 23:25:03.282001 1362600 config.go:182] Loaded profile config "no-preload-947754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:25:03.282094 1362600 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:25:03.315136 1362600 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 23:25:03.315267 1362600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:25:03.392330 1362600 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 23:25:03.383036954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:25:03.392435 1362600 docker.go:318] overlay module found
	I1027 23:25:03.395507 1362600 out.go:179] * Using the docker driver based on user configuration
	I1027 23:25:03.398363 1362600 start.go:307] selected driver: docker
	I1027 23:25:03.398424 1362600 start.go:928] validating driver "docker" against <nil>
	I1027 23:25:03.398447 1362600 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:25:03.399234 1362600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:25:03.515243 1362600 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 23:25:03.505566807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:25:03.515380 1362600 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 23:25:03.515615 1362600 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:25:03.518696 1362600 out.go:179] * Using Docker driver with root privileges
	I1027 23:25:03.521593 1362600 cni.go:84] Creating CNI manager for ""
	I1027 23:25:03.521721 1362600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:25:03.521739 1362600 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 23:25:03.521818 1362600 start.go:351] cluster config:
	{Name:embed-certs-790322 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
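
The struct dump above is what minikube persists to the profile's config.json two lines below. A minimal Go sketch for reading a few of those fields back out of that file; the JSON keys are assumed to match the exported field names in the dump (Go's default marshaling), and the path is the one from this run.

// readprofile.go - sketch: read selected fields from a minikube profile
// config.json. Field names are assumed to mirror the exported struct
// fields shown in the cluster config dump above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

type kubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
}

type clusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	KubernetesConfig kubernetesConfig
}

func main() {
	// Path taken from this run; adjust for your environment.
	data, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/profiles/embed-certs-790322/config.json"))
	if err != nil {
		log.Fatal(err)
	}
	var cfg clusterConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: driver=%s runtime=%s k8s=%s mem=%dMiB cpus=%d\n",
		cfg.Name, cfg.Driver, cfg.KubernetesConfig.ContainerRuntime,
		cfg.KubernetesConfig.KubernetesVersion, cfg.Memory, cfg.CPUs)
}
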
	I1027 23:25:03.525001 1362600 out.go:179] * Starting "embed-certs-790322" primary control-plane node in "embed-certs-790322" cluster
	I1027 23:25:03.527945 1362600 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 23:25:03.530938 1362600 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 23:25:03.533794 1362600 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:25:03.533859 1362600 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 23:25:03.533871 1362600 cache.go:59] Caching tarball of preloaded images
	I1027 23:25:03.533966 1362600 preload.go:233] Found /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 23:25:03.533981 1362600 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
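
The "Found local preload ... skipping download" step above reduces to an existence check of the expected tarball in the cache directory. A sketch of that check, with the path from this run; the checksum validation minikube performs on preloads is omitted here.

// preloadcheck.go - sketch of the cache check behind "Found local
// preload": stat the expected tarball and only download when missing.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	tarball := filepath.Join(os.Getenv("HOME"),
		".minikube/cache/preloaded-tarball",
		"preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4")
	if _, err := os.Stat(tarball); err == nil {
		fmt.Println("found local preload, skipping download")
	} else if os.IsNotExist(err) {
		fmt.Println("preload missing, would download")
	} else {
		fmt.Println("stat error:", err)
	}
}
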
	I1027 23:25:03.534090 1362600 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/config.json ...
	I1027 23:25:03.534113 1362600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/config.json: {Name:mka1aab8020ff97f43150affc26d5349eac709ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:25:03.534283 1362600 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 23:25:03.566235 1362600 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 23:25:03.566256 1362600 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 23:25:03.566271 1362600 cache.go:233] Successfully downloaded all kic artifacts
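
"exists in daemon, skipping load" above is an image lookup against the local Docker daemon. From the CLI side the same question is answered by the exit code of docker image inspect; a small sketch wrapping that (the wrapper function is illustrative, not minikube's code):

// imagecheck.go - sketch of "exists in daemon, skipping load":
// `docker image inspect` exits 0 only when the reference is already
// present in the local daemon, so its exit code doubles as the check.
package main

import (
	"fmt"
	"os/exec"
)

func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"
	if imageInDaemon(ref) {
		fmt.Println("found in local docker daemon, skipping pull")
	} else {
		fmt.Println("not in daemon, would pull")
	}
}
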
	I1027 23:25:03.566304 1362600 start.go:360] acquireMachinesLock for embed-certs-790322: {Name:mk0a741ca206e2e37bd9112a34c7fc5ed8359e78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:03.566514 1362600 start.go:364] duration metric: took 193.225µs to acquireMachinesLock for "embed-certs-790322"
	I1027 23:25:03.566549 1362600 start.go:93] Provisioning new machine with config: &{Name:embed-certs-790322 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:25:03.566630 1362600 start.go:125] createHost starting for "" (driver="docker")
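
acquireMachinesLock above runs with Delay:500ms and Timeout:10m0s and returned in 193µs only because nothing else held the lock. The shape is a named lock polled at the delay interval until the timeout expires; a file-based sketch with those parameters follows (minikube itself uses a mutex library rather than a lock file, so this is the pattern, not the implementation):

// machinelock.go - sketch of a retry-until-timeout named lock, mirroring
// the Delay:500ms / Timeout:10m0s parameters in the log. Creating the
// lock file with O_EXCL is the atomic "acquire".
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func acquire(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return nil // lock acquired; remove the file to release
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for lock")
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	if err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("acquired in %s\n", time.Since(start))
}
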
	
	
	==> CRI-O <==
	Oct 27 23:24:51 no-preload-947754 crio[837]: time="2025-10-27T23:24:51.931199516Z" level=info msg="Created container 76ab5e5df33c5b68558f4b4e1f4a221da173330565f69da40b5a592e58c576ba: kube-system/coredns-66bc5c9577-mzm5d/coredns" id=83432098-0af3-4a08-90a4-1873e7c2300f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:24:51 no-preload-947754 crio[837]: time="2025-10-27T23:24:51.936414909Z" level=info msg="Starting container: 76ab5e5df33c5b68558f4b4e1f4a221da173330565f69da40b5a592e58c576ba" id=27750d9b-9bec-4831-8e98-5700428e8c5f name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:24:51 no-preload-947754 crio[837]: time="2025-10-27T23:24:51.938186035Z" level=info msg="Started container" PID=2478 containerID=76ab5e5df33c5b68558f4b4e1f4a221da173330565f69da40b5a592e58c576ba description=kube-system/coredns-66bc5c9577-mzm5d/coredns id=27750d9b-9bec-4831-8e98-5700428e8c5f name=/runtime.v1.RuntimeService/StartContainer sandboxID=e6fdbb48454c01568b040f733bfefc3699068964dcb72217ca9a49c45ff252a0
	Oct 27 23:24:55 no-preload-947754 crio[837]: time="2025-10-27T23:24:55.618866829Z" level=info msg="Running pod sandbox: default/busybox/POD" id=200d9a54-1598-4f7a-aad1-af7bc5610e69 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:24:55 no-preload-947754 crio[837]: time="2025-10-27T23:24:55.618956833Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:24:55 no-preload-947754 crio[837]: time="2025-10-27T23:24:55.638519693Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0fbbd52dad3725710aa2bf79ab291f8fbdabf9b7d9a92420d2355c13df4b15de UID:436727ba-f898-49e4-ae12-49daa555d6ba NetNS:/var/run/netns/18c74af3-2687-48ca-921f-4757bbf2c6f1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40014a6708}] Aliases:map[]}"
	Oct 27 23:24:55 no-preload-947754 crio[837]: time="2025-10-27T23:24:55.63870688Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 27 23:24:55 no-preload-947754 crio[837]: time="2025-10-27T23:24:55.648980748Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0fbbd52dad3725710aa2bf79ab291f8fbdabf9b7d9a92420d2355c13df4b15de UID:436727ba-f898-49e4-ae12-49daa555d6ba NetNS:/var/run/netns/18c74af3-2687-48ca-921f-4757bbf2c6f1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40014a6708}] Aliases:map[]}"
	Oct 27 23:24:55 no-preload-947754 crio[837]: time="2025-10-27T23:24:55.649292104Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 27 23:24:55 no-preload-947754 crio[837]: time="2025-10-27T23:24:55.655840356Z" level=info msg="Ran pod sandbox 0fbbd52dad3725710aa2bf79ab291f8fbdabf9b7d9a92420d2355c13df4b15de with infra container: default/busybox/POD" id=200d9a54-1598-4f7a-aad1-af7bc5610e69 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:24:55 no-preload-947754 crio[837]: time="2025-10-27T23:24:55.657762033Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9c82346d-8296-4c6d-8e15-5e003c597d94 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:24:55 no-preload-947754 crio[837]: time="2025-10-27T23:24:55.657939685Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9c82346d-8296-4c6d-8e15-5e003c597d94 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:24:55 no-preload-947754 crio[837]: time="2025-10-27T23:24:55.657982672Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=9c82346d-8296-4c6d-8e15-5e003c597d94 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:24:55 no-preload-947754 crio[837]: time="2025-10-27T23:24:55.661055995Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ac16f243-5a6b-46fa-a5f3-907685299068 name=/runtime.v1.ImageService/PullImage
	Oct 27 23:24:55 no-preload-947754 crio[837]: time="2025-10-27T23:24:55.662535064Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 27 23:24:57 no-preload-947754 crio[837]: time="2025-10-27T23:24:57.774154292Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=ac16f243-5a6b-46fa-a5f3-907685299068 name=/runtime.v1.ImageService/PullImage
	Oct 27 23:24:57 no-preload-947754 crio[837]: time="2025-10-27T23:24:57.775321094Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8be44984-f48a-4988-8e7d-767a4c682e73 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:24:57 no-preload-947754 crio[837]: time="2025-10-27T23:24:57.778877148Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0015977e-5821-4f62-9caf-c5e8760cd74a name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:24:57 no-preload-947754 crio[837]: time="2025-10-27T23:24:57.78695311Z" level=info msg="Creating container: default/busybox/busybox" id=e253d6cf-ae02-40a6-81dd-80e8b5c85596 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:24:57 no-preload-947754 crio[837]: time="2025-10-27T23:24:57.78738085Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:24:57 no-preload-947754 crio[837]: time="2025-10-27T23:24:57.797680285Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:24:57 no-preload-947754 crio[837]: time="2025-10-27T23:24:57.798811337Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:24:57 no-preload-947754 crio[837]: time="2025-10-27T23:24:57.822560743Z" level=info msg="Created container ba26781eb20eebbb42bcacfeccb3260ced852b0feb4d9720a350eaa27ccf1df9: default/busybox/busybox" id=e253d6cf-ae02-40a6-81dd-80e8b5c85596 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:24:57 no-preload-947754 crio[837]: time="2025-10-27T23:24:57.82551276Z" level=info msg="Starting container: ba26781eb20eebbb42bcacfeccb3260ced852b0feb4d9720a350eaa27ccf1df9" id=421df27a-e5f2-4993-9164-12886929b8b5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:24:57 no-preload-947754 crio[837]: time="2025-10-27T23:24:57.829708586Z" level=info msg="Started container" PID=2528 containerID=ba26781eb20eebbb42bcacfeccb3260ced852b0feb4d9720a350eaa27ccf1df9 description=default/busybox/busybox id=421df27a-e5f2-4993-9164-12886929b8b5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0fbbd52dad3725710aa2bf79ab291f8fbdabf9b7d9a92420d2355c13df4b15de
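
The CRI-O log above walks the standard CRI sequence for the busybox pod: ImageStatus (miss), PullImage, CreateContainer, StartContainer. The same first two calls can be replayed from the node with crictl, the CRI CLI; a sketch under that assumption:

// crisequence.go - sketch replaying the CRI image calls visible in the
// CRI-O log via crictl: status check (inspecti), pull on a miss.
// CreateContainer/StartContainer (`crictl create` / `crictl start`)
// need sandbox and container config files and are omitted here.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("crictl", args...).CombinedOutput()
	fmt.Printf("$ crictl %v\n%s", args, out)
	return err
}

func main() {
	img := "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	// ImageStatus: inspecti exits non-zero when the image is absent.
	if run("inspecti", img) != nil {
		// PullImage, as in the "Pulling image" log lines above.
		if err := run("pull", img); err != nil {
			fmt.Println("pull failed:", err)
		}
	}
}
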
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ba26781eb20ee       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   0fbbd52dad372       busybox                                     default
	76ab5e5df33c5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago      Running             coredns                   0                   e6fdbb48454c0       coredns-66bc5c9577-mzm5d                    kube-system
	288d00dfa2d6b       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      13 seconds ago      Running             storage-provisioner       0                   da538b476fcb6       storage-provisioner                         kube-system
	6ce1cc59994f2       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   0f79bb0b4ccd4       kindnet-m7l4b                               kube-system
	f5b4ec9c72667       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      27 seconds ago      Running             kube-proxy                0                   550f89e032f0e       kube-proxy-29878                            kube-system
	27c29b2d458c3       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      44 seconds ago      Running             kube-controller-manager   0                   427e83d1b6269       kube-controller-manager-no-preload-947754   kube-system
	f2fa55c9f4a6c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      44 seconds ago      Running             kube-apiserver            0                   d9b03567939cc       kube-apiserver-no-preload-947754            kube-system
	fc8e8663146a2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      44 seconds ago      Running             kube-scheduler            0                   40ad6ad8cf997       kube-scheduler-no-preload-947754            kube-system
	768db72369789       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      44 seconds ago      Running             etcd                      0                   8f5d6c3a14582       etcd-no-preload-947754                      kube-system
	
	
	==> coredns [76ab5e5df33c5b68558f4b4e1f4a221da173330565f69da40b5a592e58c576ba] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49274 - 22628 "HINFO IN 2846438802542515798.36694285293620566. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.018573317s
	
	
	==> describe nodes <==
	Name:               no-preload-947754
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-947754
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=no-preload-947754
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T23_24_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 23:24:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-947754
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 23:25:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 23:25:02 +0000   Mon, 27 Oct 2025 23:24:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 23:25:02 +0000   Mon, 27 Oct 2025 23:24:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 23:25:02 +0000   Mon, 27 Oct 2025 23:24:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 23:25:02 +0000   Mon, 27 Oct 2025 23:24:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-947754
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                c8ec03af-833c-45dd-b53c-bcc66992da89
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-mzm5d                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-no-preload-947754                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-m7l4b                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-947754             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-947754    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-29878                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-947754             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   Starting                 45s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 45s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  45s (x7 over 45s)  kubelet          Node no-preload-947754 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x7 over 45s)  kubelet          Node no-preload-947754 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x7 over 45s)  kubelet          Node no-preload-947754 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node no-preload-947754 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node no-preload-947754 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s                kubelet          Node no-preload-947754 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s                node-controller  Node no-preload-947754 event: Registered Node no-preload-947754 in Controller
	  Normal   NodeReady                14s                kubelet          Node no-preload-947754 status is now: NodeReady
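
The "Allocated resources" block in the node description above is just the column sum of the pod table: CPU requests of 100m + 100m + 100m + 250m + 200m + 100m = 850m, i.e. 42% of the node's 2000m allocatable. A quick check of that arithmetic:

// requests.go - verify the describe-nodes arithmetic: summed CPU
// requests over the non-terminated pods against 2 allocatable cores.
package main

import "fmt"

func main() {
	requestsMilli := map[string]int{
		"coredns":                 100,
		"etcd":                    100,
		"kindnet":                 100,
		"kube-apiserver":          250,
		"kube-controller-manager": 200,
		"kube-scheduler":          100,
		// busybox, kube-proxy, storage-provisioner request 0
	}
	total := 0
	for _, m := range requestsMilli {
		total += m
	}
	allocatableMilli := 2000 // 2 CPUs
	fmt.Printf("%dm of %dm (%d%%)\n", total, allocatableMilli, total*100/allocatableMilli)
	// prints: 850m of 2000m (42%)
}
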
	
	
	==> dmesg <==
	[  +1.719322] overlayfs: idmapped layers are currently not supported
	[Oct27 23:00] overlayfs: idmapped layers are currently not supported
	[Oct27 23:01] overlayfs: idmapped layers are currently not supported
	[ +42.515610] overlayfs: idmapped layers are currently not supported
	[Oct27 23:02] overlayfs: idmapped layers are currently not supported
	[Oct27 23:03] overlayfs: idmapped layers are currently not supported
	[Oct27 23:04] overlayfs: idmapped layers are currently not supported
	[Oct27 23:06] overlayfs: idmapped layers are currently not supported
	[  +3.129054] overlayfs: idmapped layers are currently not supported
	[Oct27 23:08] overlayfs: idmapped layers are currently not supported
	[Oct27 23:09] overlayfs: idmapped layers are currently not supported
	[  +0.696324] overlayfs: idmapped layers are currently not supported
	[ +42.065460] overlayfs: idmapped layers are currently not supported
	[Oct27 23:10] overlayfs: idmapped layers are currently not supported
	[ +23.722860] overlayfs: idmapped layers are currently not supported
	[Oct27 23:16] overlayfs: idmapped layers are currently not supported
	[Oct27 23:17] overlayfs: idmapped layers are currently not supported
	[Oct27 23:18] overlayfs: idmapped layers are currently not supported
	[Oct27 23:19] overlayfs: idmapped layers are currently not supported
	[Oct27 23:20] overlayfs: idmapped layers are currently not supported
	[Oct27 23:21] overlayfs: idmapped layers are currently not supported
	[Oct27 23:22] overlayfs: idmapped layers are currently not supported
	[ +34.590925] overlayfs: idmapped layers are currently not supported
	[Oct27 23:23] overlayfs: idmapped layers are currently not supported
	[  +6.906011] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [768db72369789bd5e9631f8a5c7f03a4ef677f273e5f854ed8a496eca29dfb7f] <==
	{"level":"warn","ts":"2025-10-27T23:24:24.895966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:24.960284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:24.991635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:25.041539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:25.085391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:25.136888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:25.159578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:25.200421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:25.250249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:25.305848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:25.349175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:25.398912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:25.474939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:25.575316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:25.620956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:25.681208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:25.753160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:25.784036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:25.852543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:25.931585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:26.036057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:26.053475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:26.149460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:26.191847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:24:26.479671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56818","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:25:05 up  6:07,  0 user,  load average: 3.60, 3.62, 3.13
	Linux no-preload-947754 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6ce1cc59994f21fdd266cd61ce1c87bcb81c471866059ab7097f86362f3358ef] <==
	I1027 23:24:41.118060       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:24:41.118955       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 23:24:41.119137       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:24:41.119186       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:24:41.119226       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:24:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:24:41.321544       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:24:41.321627       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:24:41.321662       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:24:41.327982       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 23:24:41.526830       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 23:24:41.526963       1 metrics.go:72] Registering metrics
	I1027 23:24:41.527050       1 controller.go:711] "Syncing nftables rules"
	I1027 23:24:51.327807       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 23:24:51.327865       1 main.go:301] handling current node
	I1027 23:25:01.321339       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 23:25:01.321382       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f2fa55c9f4a6c9552a43381557a40329d7f79bd91c6bd5c489beed4b4bb74b33] <==
	I1027 23:24:28.268092       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 23:24:28.275792       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 23:24:28.316196       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 23:24:28.317807       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:24:28.399540       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:24:28.403234       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 23:24:28.527236       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 23:24:28.706002       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 23:24:28.737168       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 23:24:28.737191       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 23:24:29.959251       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 23:24:30.072272       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 23:24:30.187384       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 23:24:30.196831       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1027 23:24:30.198252       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 23:24:30.207223       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 23:24:30.922930       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 23:24:31.557318       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 23:24:31.604045       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 23:24:31.631572       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 23:24:36.567989       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 23:24:36.984806       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:24:36.991885       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:24:37.017297       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1027 23:25:03.495409       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:38576: use of closed network connection
	
	
	==> kube-controller-manager [27c29b2d458c355a63a743a20ad97f7a10e8d2c225adcd8aad530d3154fa9f11] <==
	I1027 23:24:35.930748       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 23:24:35.930798       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 23:24:35.930833       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 23:24:35.930856       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 23:24:35.930862       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 23:24:35.930868       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 23:24:35.931258       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 23:24:35.934483       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 23:24:35.934690       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 23:24:35.935046       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-947754"
	I1027 23:24:35.935113       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 23:24:35.947818       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:24:35.951039       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-947754" podCIDRs=["10.244.0.0/24"]
	I1027 23:24:35.958051       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 23:24:35.958120       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:24:35.958290       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 23:24:35.959951       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 23:24:35.960084       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 23:24:35.960407       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 23:24:35.960580       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 23:24:35.962099       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 23:24:35.962475       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 23:24:35.962495       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 23:24:35.963594       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 23:24:55.938582       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f5b4ec9c7266796628059549ab08b5a3c6459f0d4b25d80cf98513294322e020] <==
	I1027 23:24:37.677063       1 server_linux.go:53] "Using iptables proxy"
	I1027 23:24:37.783870       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 23:24:37.886480       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 23:24:37.886523       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 23:24:37.886593       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 23:24:38.061616       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:24:38.061669       1 server_linux.go:132] "Using iptables Proxier"
	I1027 23:24:38.089809       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 23:24:38.090182       1 server.go:527] "Version info" version="v1.34.1"
	I1027 23:24:38.090217       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:24:38.091821       1 config.go:200] "Starting service config controller"
	I1027 23:24:38.091834       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 23:24:38.091852       1 config.go:106] "Starting endpoint slice config controller"
	I1027 23:24:38.091856       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 23:24:38.091870       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 23:24:38.091874       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 23:24:38.092655       1 config.go:309] "Starting node config controller"
	I1027 23:24:38.092663       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 23:24:38.092670       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 23:24:38.192273       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 23:24:38.192309       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 23:24:38.192351       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fc8e8663146a2d486cf379b54cb0ded7d5b1d9c103681f60ab02b139d71e2c3c] <==
	I1027 23:24:29.575771       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:24:29.597751       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 23:24:29.597857       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:24:29.598752       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 23:24:29.598860       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1027 23:24:29.628795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1027 23:24:29.629185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 23:24:29.629258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 23:24:29.629328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 23:24:29.635073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 23:24:29.635212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 23:24:29.635289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 23:24:29.635440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 23:24:29.635575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 23:24:29.635620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 23:24:29.635667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 23:24:29.635712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 23:24:29.635755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 23:24:29.635798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 23:24:29.635839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 23:24:29.636008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 23:24:29.636053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 23:24:29.636107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 23:24:29.636156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1027 23:24:31.199181       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 23:24:32 no-preload-947754 kubelet[1989]: I1027 23:24:32.809237    1989 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 27 23:24:33 no-preload-947754 kubelet[1989]: I1027 23:24:33.063993    1989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-947754" podStartSLOduration=1.063972867 podStartE2EDuration="1.063972867s" podCreationTimestamp="2025-10-27 23:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:24:33.031387701 +0000 UTC m=+1.536480298" watchObservedRunningTime="2025-10-27 23:24:33.063972867 +0000 UTC m=+1.569065455"
	Oct 27 23:24:33 no-preload-947754 kubelet[1989]: I1027 23:24:33.064708    1989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-947754" podStartSLOduration=1.064694698 podStartE2EDuration="1.064694698s" podCreationTimestamp="2025-10-27 23:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:24:33.06434431 +0000 UTC m=+1.569436915" watchObservedRunningTime="2025-10-27 23:24:33.064694698 +0000 UTC m=+1.569787295"
	Oct 27 23:24:36 no-preload-947754 kubelet[1989]: I1027 23:24:36.027764    1989 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 27 23:24:36 no-preload-947754 kubelet[1989]: I1027 23:24:36.028497    1989 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 23:24:37 no-preload-947754 kubelet[1989]: I1027 23:24:37.171669    1989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/affca46b-bf6e-4821-a5e4-d7082cacdc04-xtables-lock\") pod \"kube-proxy-29878\" (UID: \"affca46b-bf6e-4821-a5e4-d7082cacdc04\") " pod="kube-system/kube-proxy-29878"
	Oct 27 23:24:37 no-preload-947754 kubelet[1989]: I1027 23:24:37.171718    1989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/affca46b-bf6e-4821-a5e4-d7082cacdc04-lib-modules\") pod \"kube-proxy-29878\" (UID: \"affca46b-bf6e-4821-a5e4-d7082cacdc04\") " pod="kube-system/kube-proxy-29878"
	Oct 27 23:24:37 no-preload-947754 kubelet[1989]: I1027 23:24:37.171743    1989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp9zt\" (UniqueName: \"kubernetes.io/projected/affca46b-bf6e-4821-a5e4-d7082cacdc04-kube-api-access-qp9zt\") pod \"kube-proxy-29878\" (UID: \"affca46b-bf6e-4821-a5e4-d7082cacdc04\") " pod="kube-system/kube-proxy-29878"
	Oct 27 23:24:37 no-preload-947754 kubelet[1989]: I1027 23:24:37.171768    1989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/affca46b-bf6e-4821-a5e4-d7082cacdc04-kube-proxy\") pod \"kube-proxy-29878\" (UID: \"affca46b-bf6e-4821-a5e4-d7082cacdc04\") " pod="kube-system/kube-proxy-29878"
	Oct 27 23:24:37 no-preload-947754 kubelet[1989]: I1027 23:24:37.272107    1989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/baea7a6f-5608-4c48-bd36-abcd541e2d3b-cni-cfg\") pod \"kindnet-m7l4b\" (UID: \"baea7a6f-5608-4c48-bd36-abcd541e2d3b\") " pod="kube-system/kindnet-m7l4b"
	Oct 27 23:24:37 no-preload-947754 kubelet[1989]: I1027 23:24:37.272165    1989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spb8r\" (UniqueName: \"kubernetes.io/projected/baea7a6f-5608-4c48-bd36-abcd541e2d3b-kube-api-access-spb8r\") pod \"kindnet-m7l4b\" (UID: \"baea7a6f-5608-4c48-bd36-abcd541e2d3b\") " pod="kube-system/kindnet-m7l4b"
	Oct 27 23:24:37 no-preload-947754 kubelet[1989]: I1027 23:24:37.272207    1989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/baea7a6f-5608-4c48-bd36-abcd541e2d3b-xtables-lock\") pod \"kindnet-m7l4b\" (UID: \"baea7a6f-5608-4c48-bd36-abcd541e2d3b\") " pod="kube-system/kindnet-m7l4b"
	Oct 27 23:24:37 no-preload-947754 kubelet[1989]: I1027 23:24:37.272225    1989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baea7a6f-5608-4c48-bd36-abcd541e2d3b-lib-modules\") pod \"kindnet-m7l4b\" (UID: \"baea7a6f-5608-4c48-bd36-abcd541e2d3b\") " pod="kube-system/kindnet-m7l4b"
	Oct 27 23:24:37 no-preload-947754 kubelet[1989]: I1027 23:24:37.374058    1989 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 27 23:24:40 no-preload-947754 kubelet[1989]: I1027 23:24:40.473368    1989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-29878" podStartSLOduration=3.4733509209999998 podStartE2EDuration="3.473350921s" podCreationTimestamp="2025-10-27 23:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:24:38.14547915 +0000 UTC m=+6.650571756" watchObservedRunningTime="2025-10-27 23:24:40.473350921 +0000 UTC m=+8.978443510"
	Oct 27 23:24:41 no-preload-947754 kubelet[1989]: I1027 23:24:41.180596    1989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-m7l4b" podStartSLOduration=0.738308072 podStartE2EDuration="4.180559787s" podCreationTimestamp="2025-10-27 23:24:37 +0000 UTC" firstStartedPulling="2025-10-27 23:24:37.488121711 +0000 UTC m=+5.993214300" lastFinishedPulling="2025-10-27 23:24:40.930373418 +0000 UTC m=+9.435466015" observedRunningTime="2025-10-27 23:24:41.147012318 +0000 UTC m=+9.652104915" watchObservedRunningTime="2025-10-27 23:24:41.180559787 +0000 UTC m=+9.685652384"
	Oct 27 23:24:51 no-preload-947754 kubelet[1989]: I1027 23:24:51.468155    1989 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 27 23:24:51 no-preload-947754 kubelet[1989]: I1027 23:24:51.579742    1989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7d8c57e3-c8ca-4466-9c32-fb227a39b7c5-tmp\") pod \"storage-provisioner\" (UID: \"7d8c57e3-c8ca-4466-9c32-fb227a39b7c5\") " pod="kube-system/storage-provisioner"
	Oct 27 23:24:51 no-preload-947754 kubelet[1989]: I1027 23:24:51.579975    1989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7af0a1a1-b33d-4152-ac15-91c2455b2d4c-config-volume\") pod \"coredns-66bc5c9577-mzm5d\" (UID: \"7af0a1a1-b33d-4152-ac15-91c2455b2d4c\") " pod="kube-system/coredns-66bc5c9577-mzm5d"
	Oct 27 23:24:51 no-preload-947754 kubelet[1989]: I1027 23:24:51.580016    1989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtzjb\" (UniqueName: \"kubernetes.io/projected/7d8c57e3-c8ca-4466-9c32-fb227a39b7c5-kube-api-access-wtzjb\") pod \"storage-provisioner\" (UID: \"7d8c57e3-c8ca-4466-9c32-fb227a39b7c5\") " pod="kube-system/storage-provisioner"
	Oct 27 23:24:51 no-preload-947754 kubelet[1989]: I1027 23:24:51.580043    1989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncsw8\" (UniqueName: \"kubernetes.io/projected/7af0a1a1-b33d-4152-ac15-91c2455b2d4c-kube-api-access-ncsw8\") pod \"coredns-66bc5c9577-mzm5d\" (UID: \"7af0a1a1-b33d-4152-ac15-91c2455b2d4c\") " pod="kube-system/coredns-66bc5c9577-mzm5d"
	Oct 27 23:24:52 no-preload-947754 kubelet[1989]: I1027 23:24:52.176492    1989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.176475529 podStartE2EDuration="14.176475529s" podCreationTimestamp="2025-10-27 23:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:24:52.176099934 +0000 UTC m=+20.681192523" watchObservedRunningTime="2025-10-27 23:24:52.176475529 +0000 UTC m=+20.681568126"
	Oct 27 23:24:53 no-preload-947754 kubelet[1989]: I1027 23:24:53.179720    1989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mzm5d" podStartSLOduration=16.179701297 podStartE2EDuration="16.179701297s" podCreationTimestamp="2025-10-27 23:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:24:52.195776257 +0000 UTC m=+20.700868854" watchObservedRunningTime="2025-10-27 23:24:53.179701297 +0000 UTC m=+21.684793886"
	Oct 27 23:24:55 no-preload-947754 kubelet[1989]: I1027 23:24:55.422278    1989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7scq8\" (UniqueName: \"kubernetes.io/projected/436727ba-f898-49e4-ae12-49daa555d6ba-kube-api-access-7scq8\") pod \"busybox\" (UID: \"436727ba-f898-49e4-ae12-49daa555d6ba\") " pod="default/busybox"
	Oct 27 23:24:55 no-preload-947754 kubelet[1989]: W1027 23:24:55.654046    1989 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd/crio-0fbbd52dad3725710aa2bf79ab291f8fbdabf9b7d9a92420d2355c13df4b15de WatchSource:0}: Error finding container 0fbbd52dad3725710aa2bf79ab291f8fbdabf9b7d9a92420d2355c13df4b15de: Status 404 returned error can't find the container with id 0fbbd52dad3725710aa2bf79ab291f8fbdabf9b7d9a92420d2355c13df4b15de
	
	
	==> storage-provisioner [288d00dfa2d6bf73af6db4473733a091b3c76f7e18c7aa70028ea05a8e21208a] <==
	I1027 23:24:51.888816       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 23:24:51.913322       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 23:24:51.913393       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 23:24:51.925295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:24:51.939393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:24:51.941066       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 23:24:51.942373       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-947754_47cbbdf7-eec8-44d6-8bf4-d21649135bd4!
	I1027 23:24:51.943431       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"77faea05-b4f8-4145-b717-91f936278f59", APIVersion:"v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-947754_47cbbdf7-eec8-44d6-8bf4-d21649135bd4 became leader
	W1027 23:24:51.946703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:24:51.954345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:24:52.043422       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-947754_47cbbdf7-eec8-44d6-8bf4-d21649135bd4!
	W1027 23:24:53.958315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:24:53.966629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:24:55.979943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:24:55.989122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:24:57.992136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:24:58.001842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:25:00.050582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:25:00.067471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:25:02.075252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:25:02.080621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:25:04.085192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:25:04.093460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
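Note: the repeated warnings in the storage-provisioner log above appear benign in this run; leader election succeeded and the controller started, and the warnings only flag that the election lease is still written through the deprecated v1 Endpoints API. A minimal sketch for inspecting the lease by hand, assuming the context name and the k8s.io-minikube-hostpath object named in the LeaderElection event above:

	kubectl --context no-preload-947754 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	kubectl --context no-preload-947754 -n kube-system get endpointslices   # the discovery.k8s.io/v1 replacement the warning points to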
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-947754 -n no-preload-947754
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-947754 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-947754 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-947754 --alsologtostderr -v=1: exit status 80 (1.96477273s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-947754 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 23:26:25.664567 1367999 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:26:25.664766 1367999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:26:25.664794 1367999 out.go:374] Setting ErrFile to fd 2...
	I1027 23:26:25.664813 1367999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:26:25.665078 1367999 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:26:25.665364 1367999 out.go:368] Setting JSON to false
	I1027 23:26:25.665412 1367999 mustload.go:66] Loading cluster: no-preload-947754
	I1027 23:26:25.665825 1367999 config.go:182] Loaded profile config "no-preload-947754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:26:25.666338 1367999 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:26:25.683413 1367999 host.go:66] Checking if "no-preload-947754" exists ...
	I1027 23:26:25.683724 1367999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:26:25.742767 1367999 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-27 23:26:25.73295997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:26:25.743409 1367999 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761414747-21797/minikube-v1.37.0-1761414747-21797-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761414747-21797-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-947754 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 23:26:25.751211 1367999 out.go:179] * Pausing node no-preload-947754 ... 
	I1027 23:26:25.754676 1367999 host.go:66] Checking if "no-preload-947754" exists ...
	I1027 23:26:25.755018 1367999 ssh_runner.go:195] Run: systemctl --version
	I1027 23:26:25.755069 1367999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:26:25.772394 1367999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:26:25.881398 1367999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:26:25.897114 1367999 pause.go:52] kubelet running: true
	I1027 23:26:25.897190 1367999 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 23:26:26.169866 1367999 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 23:26:26.169964 1367999 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 23:26:26.245212 1367999 cri.go:89] found id: "a9afcfa94ebd1357f2da7111c52cf9032a26396ad5338a0fbec038de3eb2dfd0"
	I1027 23:26:26.245238 1367999 cri.go:89] found id: "dce502a0987347d98c1fadd581f5383d9c39aebc92f303d3c2f85a014ca708fd"
	I1027 23:26:26.245244 1367999 cri.go:89] found id: "411070ec7a49e4f7f558d049d91a93e52b7f68d46532edcf9784b3a28da65fe6"
	I1027 23:26:26.245248 1367999 cri.go:89] found id: "72419b65a3b57a571d664d92c78cb819499e775deac68bc21b2c1056c29b67bc"
	I1027 23:26:26.245251 1367999 cri.go:89] found id: "f06617fb88cc02987c92472c35f87309338616d5e8dbb92304621d4132735bbb"
	I1027 23:26:26.245255 1367999 cri.go:89] found id: "9f23df14f2981858d26fa46d7024756723417501e064c150efed848207a12d0c"
	I1027 23:26:26.245258 1367999 cri.go:89] found id: "8d31e22ed9a43d906de78edcbe062d2a70163bf79ab57e9dd6ef2531387faeea"
	I1027 23:26:26.245261 1367999 cri.go:89] found id: "cf6586816133757006922d7552cfb82bf56a3f786053d6ff45e949dbf3a4d391"
	I1027 23:26:26.245264 1367999 cri.go:89] found id: "753952329c8042b52b9f0e7089396f8c95422ec863eda044f175ca5860a37dda"
	I1027 23:26:26.245270 1367999 cri.go:89] found id: "95d9328dd9ac768fcd96be887568f43b7a718761d9ae83cb1ca842b6af910fce"
	I1027 23:26:26.245274 1367999 cri.go:89] found id: "d820306abf607ac55bcab84f8735d57b9b838b6f2dcd5d7b45c692707223d95a"
	I1027 23:26:26.245277 1367999 cri.go:89] found id: ""
	I1027 23:26:26.245336 1367999 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 23:26:26.259845 1367999 retry.go:31] will retry after 338.699699ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:26:26Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:26:26.599290 1367999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:26:26.615838 1367999 pause.go:52] kubelet running: false
	I1027 23:26:26.615905 1367999 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 23:26:26.793523 1367999 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 23:26:26.793656 1367999 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 23:26:26.863179 1367999 cri.go:89] found id: "a9afcfa94ebd1357f2da7111c52cf9032a26396ad5338a0fbec038de3eb2dfd0"
	I1027 23:26:26.863256 1367999 cri.go:89] found id: "dce502a0987347d98c1fadd581f5383d9c39aebc92f303d3c2f85a014ca708fd"
	I1027 23:26:26.863276 1367999 cri.go:89] found id: "411070ec7a49e4f7f558d049d91a93e52b7f68d46532edcf9784b3a28da65fe6"
	I1027 23:26:26.863294 1367999 cri.go:89] found id: "72419b65a3b57a571d664d92c78cb819499e775deac68bc21b2c1056c29b67bc"
	I1027 23:26:26.863313 1367999 cri.go:89] found id: "f06617fb88cc02987c92472c35f87309338616d5e8dbb92304621d4132735bbb"
	I1027 23:26:26.863343 1367999 cri.go:89] found id: "9f23df14f2981858d26fa46d7024756723417501e064c150efed848207a12d0c"
	I1027 23:26:26.863368 1367999 cri.go:89] found id: "8d31e22ed9a43d906de78edcbe062d2a70163bf79ab57e9dd6ef2531387faeea"
	I1027 23:26:26.863385 1367999 cri.go:89] found id: "cf6586816133757006922d7552cfb82bf56a3f786053d6ff45e949dbf3a4d391"
	I1027 23:26:26.863403 1367999 cri.go:89] found id: "753952329c8042b52b9f0e7089396f8c95422ec863eda044f175ca5860a37dda"
	I1027 23:26:26.863426 1367999 cri.go:89] found id: "95d9328dd9ac768fcd96be887568f43b7a718761d9ae83cb1ca842b6af910fce"
	I1027 23:26:26.863454 1367999 cri.go:89] found id: "d820306abf607ac55bcab84f8735d57b9b838b6f2dcd5d7b45c692707223d95a"
	I1027 23:26:26.863480 1367999 cri.go:89] found id: ""
	I1027 23:26:26.863544 1367999 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 23:26:26.874680 1367999 retry.go:31] will retry after 387.256217ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:26:26Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:26:27.262178 1367999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:26:27.276156 1367999 pause.go:52] kubelet running: false
	I1027 23:26:27.276244 1367999 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 23:26:27.460577 1367999 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 23:26:27.460692 1367999 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 23:26:27.536219 1367999 cri.go:89] found id: "a9afcfa94ebd1357f2da7111c52cf9032a26396ad5338a0fbec038de3eb2dfd0"
	I1027 23:26:27.536241 1367999 cri.go:89] found id: "dce502a0987347d98c1fadd581f5383d9c39aebc92f303d3c2f85a014ca708fd"
	I1027 23:26:27.536247 1367999 cri.go:89] found id: "411070ec7a49e4f7f558d049d91a93e52b7f68d46532edcf9784b3a28da65fe6"
	I1027 23:26:27.536250 1367999 cri.go:89] found id: "72419b65a3b57a571d664d92c78cb819499e775deac68bc21b2c1056c29b67bc"
	I1027 23:26:27.536254 1367999 cri.go:89] found id: "f06617fb88cc02987c92472c35f87309338616d5e8dbb92304621d4132735bbb"
	I1027 23:26:27.536263 1367999 cri.go:89] found id: "9f23df14f2981858d26fa46d7024756723417501e064c150efed848207a12d0c"
	I1027 23:26:27.536302 1367999 cri.go:89] found id: "8d31e22ed9a43d906de78edcbe062d2a70163bf79ab57e9dd6ef2531387faeea"
	I1027 23:26:27.536311 1367999 cri.go:89] found id: "cf6586816133757006922d7552cfb82bf56a3f786053d6ff45e949dbf3a4d391"
	I1027 23:26:27.536316 1367999 cri.go:89] found id: "753952329c8042b52b9f0e7089396f8c95422ec863eda044f175ca5860a37dda"
	I1027 23:26:27.536326 1367999 cri.go:89] found id: "95d9328dd9ac768fcd96be887568f43b7a718761d9ae83cb1ca842b6af910fce"
	I1027 23:26:27.536335 1367999 cri.go:89] found id: "d820306abf607ac55bcab84f8735d57b9b838b6f2dcd5d7b45c692707223d95a"
	I1027 23:26:27.536343 1367999 cri.go:89] found id: ""
	I1027 23:26:27.536419 1367999 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 23:26:27.557455 1367999 out.go:203] 
	W1027 23:26:27.560717 1367999 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:26:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:26:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 23:26:27.560746 1367999 out.go:285] * 
	* 
	W1027 23:26:27.570508 1367999 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 23:26:27.573759 1367999 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-947754 --alsologtostderr -v=1 failed: exit status 80
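Note: the pause failure above bottoms out in the retried probe "sudo runc list -f json", which exits 1 with "open /run/runc: no such file or directory" even though crictl still lists the kube-system containers. A minimal sketch for reproducing the probe by hand, assuming the profile name from this log and the default runc state root (crio may be configured with a different runtime root, in which case the /run/runc path is the wrong place to look):

	minikube ssh -p no-preload-947754 -- sudo ls /run/runc          # absent on this node, hence the error
	minikube ssh -p no-preload-947754 -- sudo runc list -f json     # the exact command minikube retries
	minikube ssh -p no-preload-947754 -- sudo crictl ps -a --quiet  # containers remain visible via the CRI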
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-947754
helpers_test.go:243: (dbg) docker inspect no-preload-947754:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd",
	        "Created": "2025-10-27T23:23:41.900111117Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1365304,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T23:25:19.512967942Z",
	            "FinishedAt": "2025-10-27T23:25:18.463045545Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd/hostname",
	        "HostsPath": "/var/lib/docker/containers/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd/hosts",
	        "LogPath": "/var/lib/docker/containers/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd-json.log",
	        "Name": "/no-preload-947754",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-947754:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-947754",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd",
	                "LowerDir": "/var/lib/docker/overlay2/6c5ee39391503335b6c35014a89cbd6eea86fe3f643e367e6da44c26ee368544-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c5ee39391503335b6c35014a89cbd6eea86fe3f643e367e6da44c26ee368544/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c5ee39391503335b6c35014a89cbd6eea86fe3f643e367e6da44c26ee368544/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c5ee39391503335b6c35014a89cbd6eea86fe3f643e367e6da44c26ee368544/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-947754",
	                "Source": "/var/lib/docker/volumes/no-preload-947754/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-947754",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-947754",
	                "name.minikube.sigs.k8s.io": "no-preload-947754",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "29095ec715bd63aecaca87e1396283a0978bf22fd537dfb7541c3cebdeeca4c6",
	            "SandboxKey": "/var/run/docker/netns/29095ec715bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34579"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34580"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34583"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34581"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34582"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-947754": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:bf:18:ab:74:96",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0cbf6a9d973fe230cfa5a9e9384a72057cae1f71fd4d9191f2ef370fd36289f9",
	                    "EndpointID": "cda58594b629da7ad9391f41f1b6a4a11cf577c22c152a9f7c43e3064953a874",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-947754",
	                        "c73891b58ca0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
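Note: the Ports block in the inspect output above is where the harness derives the host-mapped SSH endpoint (127.0.0.1:34579). A minimal sketch of the equivalent direct queries, reusing the Go template from the cli_runner line earlier in this log; the expected values are read off the inspect dump above:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-947754   # -> 34579
	docker container inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-947754                      # -> running paused=false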
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-947754 -n no-preload-947754
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-947754 -n no-preload-947754: exit status 2 (371.782808ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-947754 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-947754 logs -n 25: (1.420490877s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-440075 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo containerd config dump                                                                                                                                                                                                  │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ stop    │ -p old-k8s-version-477179 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo crio config                                                                                                                                                                                                             │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ delete  │ -p bridge-440075                                                                                                                                                                                                                              │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ start   │ -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-947754      │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:24 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-477179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ start   │ -p old-k8s-version-477179 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:24 UTC │
	│ image   │ old-k8s-version-477179 image list --format=json                                                                                                                                                                                               │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │ 27 Oct 25 23:24 UTC │
	│ pause   │ -p old-k8s-version-477179 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │                     │
	│ delete  │ -p old-k8s-version-477179                                                                                                                                                                                                                     │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │ 27 Oct 25 23:25 UTC │
	│ delete  │ -p old-k8s-version-477179                                                                                                                                                                                                                     │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ start   │ -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790322     │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-947754 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-947754      │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │                     │
	│ stop    │ -p no-preload-947754 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-947754      │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ addons  │ enable dashboard -p no-preload-947754 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-947754      │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ start   │ -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-947754      │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:26 UTC │
	│ image   │ no-preload-947754 image list --format=json                                                                                                                                                                                                    │ no-preload-947754      │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ pause   │ -p no-preload-947754 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-947754      │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 23:25:19
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 23:25:19.111291 1365166 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:25:19.111468 1365166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:25:19.111474 1365166 out.go:374] Setting ErrFile to fd 2...
	I1027 23:25:19.111480 1365166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:25:19.111742 1365166 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:25:19.112131 1365166 out.go:368] Setting JSON to false
	I1027 23:25:19.113032 1365166 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22068,"bootTime":1761585451,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 23:25:19.113122 1365166 start.go:143] virtualization:  
	I1027 23:25:19.116427 1365166 out.go:179] * [no-preload-947754] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 23:25:19.120355 1365166 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:25:19.120415 1365166 notify.go:221] Checking for updates...
	I1027 23:25:19.126141 1365166 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:25:19.129084 1365166 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:25:19.132145 1365166 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 23:25:19.135156 1365166 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 23:25:19.138679 1365166 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 23:25:19.142000 1365166 config.go:182] Loaded profile config "no-preload-947754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:25:19.142724 1365166 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:25:19.183684 1365166 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 23:25:19.183794 1365166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:25:19.279983 1365166 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 23:25:19.26766076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:25:19.280085 1365166 docker.go:318] overlay module found
	I1027 23:25:19.283260 1365166 out.go:179] * Using the docker driver based on existing profile
	I1027 23:25:19.286113 1365166 start.go:307] selected driver: docker
	I1027 23:25:19.286128 1365166 start.go:928] validating driver "docker" against &{Name:no-preload-947754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-947754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:25:19.286238 1365166 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:25:19.287012 1365166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:25:19.387857 1365166 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 23:25:19.375479233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:25:19.388209 1365166 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:25:19.388239 1365166 cni.go:84] Creating CNI manager for ""
	I1027 23:25:19.388298 1365166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:25:19.388345 1365166 start.go:351] cluster config:
	{Name:no-preload-947754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-947754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
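
The profile dump above is the flattened form of what minikube persists per profile; it is the same data that profile.go reports saving to .../no-preload-947754/config.json a few lines below. A minimal sketch, assuming only the standard library; the struct mirrors a handful of fields from the dump and is not minikube's actual schema:

-- sketch --
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Hypothetical, trimmed mirror of a few fields seen in the dump above.
type kubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
}

type clusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	KubernetesConfig kubernetesConfig
}

func main() {
	// Path taken from the "Saving config to ..." line in this log.
	raw, err := os.ReadFile("/home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/config.json")
	if err != nil {
		panic(err)
	}
	var cfg clusterConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%s: driver=%s runtime=%s k8s=%s\n",
		cfg.Name, cfg.Driver, cfg.KubernetesConfig.ContainerRuntime,
		cfg.KubernetesConfig.KubernetesVersion)
}
-- /sketch --
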
	I1027 23:25:19.391534 1365166 out.go:179] * Starting "no-preload-947754" primary control-plane node in "no-preload-947754" cluster
	I1027 23:25:19.394308 1365166 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 23:25:19.397200 1365166 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 23:25:19.399834 1365166 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:25:19.399981 1365166 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/config.json ...
	I1027 23:25:19.400314 1365166 cache.go:107] acquiring lock: {Name:mk1ee9dccf1fed0178bd5f318222a7ec38ae5783 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:19.400392 1365166 cache.go:115] /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1027 23:25:19.400400 1365166 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 101.36µs
	I1027 23:25:19.400409 1365166 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1027 23:25:19.400421 1365166 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 23:25:19.400627 1365166 cache.go:107] acquiring lock: {Name:mk71a4000b532d01990b206adaacbbe8b112aa34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:19.400693 1365166 cache.go:115] /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1027 23:25:19.400702 1365166 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 79.763µs
	I1027 23:25:19.400709 1365166 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1027 23:25:19.400720 1365166 cache.go:107] acquiring lock: {Name:mk4be064d6d5271b09b25f994d534ea81d3dccd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:19.400751 1365166 cache.go:115] /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1027 23:25:19.400756 1365166 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 37.202µs
	I1027 23:25:19.400762 1365166 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1027 23:25:19.400771 1365166 cache.go:107] acquiring lock: {Name:mka01faf9e1a67b26d1b66a062e4766564c5b49c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:19.400796 1365166 cache.go:115] /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1027 23:25:19.400801 1365166 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 31.016µs
	I1027 23:25:19.400807 1365166 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1027 23:25:19.400816 1365166 cache.go:107] acquiring lock: {Name:mk4e70e86d91db286d3cdb14f85d915e029eb8d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:19.400848 1365166 cache.go:115] /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1027 23:25:19.400853 1365166 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 38.672µs
	I1027 23:25:19.400859 1365166 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1027 23:25:19.400869 1365166 cache.go:107] acquiring lock: {Name:mke902fc6f90dc0050e0797caa43a275e42251d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:19.400901 1365166 cache.go:115] /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1027 23:25:19.400906 1365166 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 37.843µs
	I1027 23:25:19.400911 1365166 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1027 23:25:19.400920 1365166 cache.go:107] acquiring lock: {Name:mk5fc1deed394b3a8d8e81fea34381b67cb3ab43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:19.400948 1365166 cache.go:115] /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1027 23:25:19.400954 1365166 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 35.603µs
	I1027 23:25:19.400959 1365166 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1027 23:25:19.400968 1365166 cache.go:107] acquiring lock: {Name:mk2206d14b7d0df15fb0480fd42557fcc1e0691c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:19.401017 1365166 cache.go:115] /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1027 23:25:19.401023 1365166 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 55.607µs
	I1027 23:25:19.401029 1365166 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1027 23:25:19.401040 1365166 cache.go:87] Successfully saved all images to host disk.
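
The pattern in the cache lines above is check-then-skip: each image maps to a tar path under cache/images/arm64, and if that file already exists the save is reported as succeeded in microseconds. A sketch of the mapping and the check; cachedTarPath is a hypothetical helper, the directory and image names are the ones from this log:

-- sketch --
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachedTarPath maps "registry.k8s.io/pause:3.10.1" to
// ".../registry.k8s.io/pause_3.10.1", matching the paths logged above.
func cachedTarPath(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

func main() {
	cacheDir := "/home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64"
	for _, img := range []string{
		"registry.k8s.io/pause:3.10.1",
		"registry.k8s.io/etcd:3.6.4-0",
	} {
		p := cachedTarPath(cacheDir, img)
		if _, err := os.Stat(p); err == nil {
			fmt.Printf("cache image %q -> %q exists, skipping save\n", img, p)
			continue
		}
		fmt.Printf("would save %q to %q\n", img, p)
	}
}
-- /sketch --
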
	I1027 23:25:19.421698 1365166 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 23:25:19.421717 1365166 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 23:25:19.421730 1365166 cache.go:233] Successfully downloaded all kic artifacts
	I1027 23:25:19.421758 1365166 start.go:360] acquireMachinesLock for no-preload-947754: {Name:mka89090453d09b34a498048eab7a34ab59dc927 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:19.421808 1365166 start.go:364] duration metric: took 35.677µs to acquireMachinesLock for "no-preload-947754"
	I1027 23:25:19.421827 1365166 start.go:96] Skipping create...Using existing machine configuration
	I1027 23:25:19.421833 1365166 fix.go:55] fixHost starting: 
	I1027 23:25:19.422095 1365166 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:25:19.450288 1365166 fix.go:113] recreateIfNeeded on no-preload-947754: state=Stopped err=<nil>
	W1027 23:25:19.450317 1365166 fix.go:139] unexpected machine state, will restart: <nil>
	I1027 23:25:19.035395 1362600 out.go:252]   - Generating certificates and keys ...
	I1027 23:25:19.035494 1362600 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 23:25:19.035562 1362600 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 23:25:19.577879 1362600 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 23:25:20.130073 1362600 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 23:25:20.355446 1362600 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 23:25:21.085475 1362600 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 23:25:21.119415 1362600 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 23:25:21.119762 1362600 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-790322 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 23:25:21.408519 1362600 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 23:25:21.408860 1362600 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-790322 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 23:25:21.881346 1362600 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 23:25:22.842139 1362600 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 23:25:19.453601 1365166 out.go:252] * Restarting existing docker container for "no-preload-947754" ...
	I1027 23:25:19.453682 1365166 cli_runner.go:164] Run: docker start no-preload-947754
	I1027 23:25:19.810128 1365166 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:25:19.847363 1365166 kic.go:430] container "no-preload-947754" state is running.
	I1027 23:25:19.847738 1365166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-947754
	I1027 23:25:19.880616 1365166 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/config.json ...
	I1027 23:25:19.880862 1365166 machine.go:94] provisionDockerMachine start ...
	I1027 23:25:19.880931 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:19.921553 1365166 main.go:143] libmachine: Using SSH client type: native
	I1027 23:25:19.921870 1365166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34579 <nil> <nil>}
	I1027 23:25:19.921879 1365166 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:25:19.922520 1365166 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38170->127.0.0.1:34579: read: connection reset by peer
	I1027 23:25:23.082926 1365166 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-947754
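
The dial error above followed immediately by a successful command is the normal pattern after `docker start`: sshd inside the container is not yet accepting connections, so provisioning simply retries the dial. A sketch of that retry, assuming golang.org/x/crypto/ssh; the port (34579), user (docker) and key path are the values from this log, while the backoff policy is illustrative:

-- sketch --
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps dialing until sshd is up or attempts run out.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		c, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return c, nil
		}
		lastErr = err // e.g. "read: connection reset by peer" while the container boots
		time.Sleep(time.Second)
	}
	return nil, lastErr
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM only
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:34579", cfg, 10)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("ssh up")
}
-- /sketch --
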
	
	I1027 23:25:23.083009 1365166 ubuntu.go:182] provisioning hostname "no-preload-947754"
	I1027 23:25:23.083099 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:23.106135 1365166 main.go:143] libmachine: Using SSH client type: native
	I1027 23:25:23.106546 1365166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34579 <nil> <nil>}
	I1027 23:25:23.106563 1365166 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-947754 && echo "no-preload-947754" | sudo tee /etc/hostname
	I1027 23:25:23.289478 1365166 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-947754
	
	I1027 23:25:23.289591 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:23.317855 1365166 main.go:143] libmachine: Using SSH client type: native
	I1027 23:25:23.318196 1365166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34579 <nil> <nil>}
	I1027 23:25:23.318215 1365166 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-947754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-947754/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-947754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:25:23.491117 1365166 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 23:25:23.491145 1365166 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:25:23.491164 1365166 ubuntu.go:190] setting up certificates
	I1027 23:25:23.491175 1365166 provision.go:84] configureAuth start
	I1027 23:25:23.491237 1365166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-947754
	I1027 23:25:23.513481 1365166 provision.go:143] copyHostCerts
	I1027 23:25:23.513552 1365166 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:25:23.513566 1365166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:25:23.513643 1365166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:25:23.513760 1365166 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:25:23.513771 1365166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:25:23.513799 1365166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:25:23.513870 1365166 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:25:23.513880 1365166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:25:23.513905 1365166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:25:23.513977 1365166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.no-preload-947754 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-947754]
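
configureAuth above generates a server certificate whose SANs cover every name the machine may be reached by. A sketch with crypto/x509 showing how those SANs land in the certificate template; for brevity it self-signs, whereas the real provisioner signs with the ca.pem/ca-key.pem pair listed in the log:

-- sketch --
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Template carrying the SANs from the log line:
	// san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-947754]
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-947754"}},
		DNSNames:     []string{"localhost", "minikube", "no-preload-947754"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here for brevity; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
-- /sketch --
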
	I1027 23:25:24.179516 1365166 provision.go:177] copyRemoteCerts
	I1027 23:25:24.179583 1365166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:25:24.179640 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:24.198758 1365166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:25:24.311331 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 23:25:24.332497 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:25:24.353201 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 23:25:24.373751 1365166 provision.go:87] duration metric: took 882.552025ms to configureAuth
	I1027 23:25:24.373828 1365166 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:25:24.374088 1365166 config.go:182] Loaded profile config "no-preload-947754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:25:24.374241 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:24.398966 1365166 main.go:143] libmachine: Using SSH client type: native
	I1027 23:25:24.399274 1365166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34579 <nil> <nil>}
	I1027 23:25:24.399288 1365166 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:25:24.797023 1365166 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:25:24.797067 1365166 machine.go:97] duration metric: took 4.916196293s to provisionDockerMachine
	I1027 23:25:24.797079 1365166 start.go:293] postStartSetup for "no-preload-947754" (driver="docker")
	I1027 23:25:24.797093 1365166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:25:24.797156 1365166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:25:24.797216 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:24.825617 1365166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:25:24.950574 1365166 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:25:24.956484 1365166 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:25:24.956511 1365166 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:25:24.956522 1365166 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:25:24.956584 1365166 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:25:24.956665 1365166 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:25:24.956772 1365166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:25:24.968815 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:25:24.995815 1365166 start.go:296] duration metric: took 198.717404ms for postStartSetup
	I1027 23:25:24.995935 1365166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:25:24.996007 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:25.037367 1365166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:25:25.148325 1365166 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:25:25.154177 1365166 fix.go:57] duration metric: took 5.732334657s for fixHost
	I1027 23:25:25.154204 1365166 start.go:83] releasing machines lock for "no-preload-947754", held for 5.732388557s
	I1027 23:25:25.154296 1365166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-947754
	I1027 23:25:25.185834 1365166 ssh_runner.go:195] Run: cat /version.json
	I1027 23:25:25.185897 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:25.186211 1365166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:25:25.186276 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:25.222930 1365166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:25:25.234570 1365166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:25:25.424887 1365166 ssh_runner.go:195] Run: systemctl --version
	I1027 23:25:25.432210 1365166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:25:25.475187 1365166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:25:25.480249 1365166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:25:25.480326 1365166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:25:25.489027 1365166 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 23:25:25.489060 1365166 start.go:496] detecting cgroup driver to use...
	I1027 23:25:25.489090 1365166 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 23:25:25.489161 1365166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:25:25.505803 1365166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:25:25.520829 1365166 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:25:25.520911 1365166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:25:25.537670 1365166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:25:25.552199 1365166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:25:25.717136 1365166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:25:25.877758 1365166 docker.go:234] disabling docker service ...
	I1027 23:25:25.877831 1365166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:25:25.900275 1365166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:25:25.913992 1365166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:25:26.091152 1365166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:25:26.267117 1365166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:25:26.291175 1365166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:25:26.312211 1365166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:25:26.312286 1365166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:25:26.325382 1365166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:25:26.325493 1365166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:25:26.336751 1365166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:25:26.347791 1365166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:25:26.362519 1365166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:25:26.372118 1365166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:25:26.385234 1365166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:25:26.394584 1365166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:25:26.405294 1365166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:25:26.415387 1365166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:25:26.425280 1365166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:25:26.610859 1365166 ssh_runner.go:195] Run: sudo systemctl restart crio
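
Each of the crio.conf edits above is an idempotent in-place line rewrite via sed, followed by one daemon-reload and restart at the end rather than one per edit. The pause_image edit as a sketch in Go, mirroring the logged command; it needs root, just like the sudo sh -c form in the log:

-- sketch --
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
}
-- /sketch --
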
	I1027 23:25:26.844157 1365166 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:25:26.844234 1365166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:25:26.848147 1365166 start.go:564] Will wait 60s for crictl version
	I1027 23:25:26.848223 1365166 ssh_runner.go:195] Run: which crictl
	I1027 23:25:26.855594 1365166 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:25:26.907703 1365166 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 23:25:26.907810 1365166 ssh_runner.go:195] Run: crio --version
	I1027 23:25:26.956111 1365166 ssh_runner.go:195] Run: crio --version
	I1027 23:25:27.007261 1365166 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 23:25:23.930714 1362600 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 23:25:23.930794 1362600 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 23:25:24.414096 1362600 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 23:25:25.842978 1362600 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 23:25:26.257716 1362600 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 23:25:27.150763 1362600 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 23:25:27.673025 1362600 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 23:25:27.674165 1362600 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 23:25:27.684523 1362600 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 23:25:27.688656 1362600 out.go:252]   - Booting up control plane ...
	I1027 23:25:27.688765 1362600 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 23:25:27.693058 1362600 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 23:25:27.693142 1362600 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 23:25:27.715487 1362600 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 23:25:27.715822 1362600 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 23:25:27.728206 1362600 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 23:25:27.728539 1362600 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 23:25:27.728767 1362600 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 23:25:27.931241 1362600 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 23:25:27.931366 1362600 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 23:25:27.010365 1365166 cli_runner.go:164] Run: docker network inspect no-preload-947754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:25:27.036257 1365166 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 23:25:27.041576 1365166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:25:27.054237 1365166 kubeadm.go:884] updating cluster {Name:no-preload-947754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-947754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:25:27.054355 1365166 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:25:27.054519 1365166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:25:27.100009 1365166 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:25:27.100032 1365166 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:25:27.100040 1365166 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 23:25:27.100148 1365166 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-947754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-947754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
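
The [Unit]/[Service] block above is a generated systemd drop-in: the kubelet binary path, hostname override and node IP are filled in from the node config that follows it. A sketch with text/template; the template text is lifted from the log, the parameter struct is hypothetical:

-- sketch --
package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	err := t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.34.1", "no-preload-947754", "192.168.76.2"})
	if err != nil {
		panic(err)
	}
}
-- /sketch --
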
	I1027 23:25:27.100231 1365166 ssh_runner.go:195] Run: crio config
	I1027 23:25:27.175054 1365166 cni.go:84] Creating CNI manager for ""
	I1027 23:25:27.175123 1365166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:25:27.175174 1365166 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 23:25:27.175223 1365166 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-947754 NodeName:no-preload-947754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:25:27.175389 1365166 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-947754"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
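
The kubeadm config above is a single YAML stream holding four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), so a consumer has to decode it document by document rather than with one Unmarshal. A sketch, assuming gopkg.in/yaml.v3; the path is the kubeadm.yaml.new target scp'd a few lines below:

-- sketch --
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// yaml.v3's Decoder yields one document per Decode call until io.EOF.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
-- /sketch --
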
	
	I1027 23:25:27.175481 1365166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:25:27.183800 1365166 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:25:27.183913 1365166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:25:27.191845 1365166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 23:25:27.225805 1365166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:25:27.244890 1365166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1027 23:25:27.262203 1365166 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:25:27.266931 1365166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:25:27.282733 1365166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:25:27.429222 1365166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:25:27.464538 1365166 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754 for IP: 192.168.76.2
	I1027 23:25:27.464561 1365166 certs.go:195] generating shared ca certs ...
	I1027 23:25:27.464586 1365166 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:25:27.464772 1365166 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:25:27.464838 1365166 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:25:27.464852 1365166 certs.go:257] generating profile certs ...
	I1027 23:25:27.464981 1365166 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/client.key
	I1027 23:25:27.465066 1365166 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.key.2667a321
	I1027 23:25:27.465119 1365166 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.key
	I1027 23:25:27.465256 1365166 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:25:27.465308 1365166 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:25:27.465322 1365166 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:25:27.465367 1365166 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:25:27.465399 1365166 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:25:27.465450 1365166 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:25:27.465522 1365166 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:25:27.466362 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:25:27.499514 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:25:27.552644 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:25:27.590207 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:25:27.623785 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 23:25:27.671572 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 23:25:27.724545 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:25:27.782945 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 23:25:27.835374 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:25:27.859346 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:25:27.887182 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:25:27.938630 1365166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:25:27.955705 1365166 ssh_runner.go:195] Run: openssl version
	I1027 23:25:27.964465 1365166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:25:27.974931 1365166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:25:27.979930 1365166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:25:27.980004 1365166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:25:28.036468 1365166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
	I1027 23:25:28.045021 1365166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:25:28.054117 1365166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:25:28.059004 1365166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:25:28.059080 1365166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:25:28.101354 1365166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:25:28.109981 1365166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:25:28.118736 1365166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:25:28.124086 1365166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:25:28.124168 1365166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:25:28.180415 1365166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
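
The ln -fs lines above implement OpenSSL's hashed-directory convention: a CA file is located via a symlink named <subject-hash>.0, with the hash produced by the same openssl x509 -hash -noout invocation the log shows (b5213941 for minikubeCA.pem). A sketch that shells out to openssl and creates the link; paths mirror the log:

-- sketch --
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same flags as the logged command: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(certPath, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", link, "->", certPath)
}
-- /sketch --
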
	I1027 23:25:28.192284 1365166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:25:28.196528 1365166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 23:25:28.280376 1365166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 23:25:28.354946 1365166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 23:25:28.443567 1365166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 23:25:28.515350 1365166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 23:25:28.573565 1365166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
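
Each openssl x509 -checkend 86400 run above asks one question: does the certificate expire within the next 24 hours (86,400 seconds)? The same check as a standard-library sketch; a true answer is what would push the restart path into regenerating certs instead of reusing them:

-- sketch --
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within duration d, i.e. the openssl -checkend semantics.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
-- /sketch --
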
	I1027 23:25:28.665296 1365166 kubeadm.go:401] StartCluster: {Name:no-preload-947754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-947754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:25:28.665404 1365166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:25:28.665521 1365166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:25:28.736721 1365166 cri.go:89] found id: "9f23df14f2981858d26fa46d7024756723417501e064c150efed848207a12d0c"
	I1027 23:25:28.736749 1365166 cri.go:89] found id: "8d31e22ed9a43d906de78edcbe062d2a70163bf79ab57e9dd6ef2531387faeea"
	I1027 23:25:28.736756 1365166 cri.go:89] found id: "cf6586816133757006922d7552cfb82bf56a3f786053d6ff45e949dbf3a4d391"
	I1027 23:25:28.736761 1365166 cri.go:89] found id: "753952329c8042b52b9f0e7089396f8c95422ec863eda044f175ca5860a37dda"
	I1027 23:25:28.736767 1365166 cri.go:89] found id: ""
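
The four IDs above come from listing kube-system containers through crictl with a label filter; the trailing empty entry is just the final newline of the quiet output. A thin sketch running the same command the log shows and splitting that output into IDs:

-- sketch --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as the logged ssh_runner command.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}
-- /sketch --
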
	I1027 23:25:28.736853 1365166 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 23:25:28.764054 1365166 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:25:28Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:25:28.764149 1365166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:25:28.787070 1365166 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 23:25:28.787094 1365166 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 23:25:28.787164 1365166 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 23:25:28.823885 1365166 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 23:25:28.824390 1365166 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-947754" does not appear in /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:25:28.824533 1365166 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-1132878/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-947754" cluster setting kubeconfig missing "no-preload-947754" context setting]
	I1027 23:25:28.824856 1365166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:25:28.826537 1365166 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 23:25:28.868417 1365166 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1027 23:25:28.868462 1365166 kubeadm.go:602] duration metric: took 81.361246ms to restartPrimaryControlPlane
	I1027 23:25:28.868472 1365166 kubeadm.go:403] duration metric: took 203.187948ms to StartCluster
	I1027 23:25:28.868487 1365166 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:25:28.868556 1365166 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:25:28.869240 1365166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:25:28.869474 1365166 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:25:28.869887 1365166 config.go:182] Loaded profile config "no-preload-947754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:25:28.869870 1365166 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:25:28.870012 1365166 addons.go:69] Setting storage-provisioner=true in profile "no-preload-947754"
	I1027 23:25:28.870030 1365166 addons.go:238] Setting addon storage-provisioner=true in "no-preload-947754"
	W1027 23:25:28.870037 1365166 addons.go:247] addon storage-provisioner should already be in state true
	I1027 23:25:28.870060 1365166 host.go:66] Checking if "no-preload-947754" exists ...
	I1027 23:25:28.870532 1365166 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:25:28.870731 1365166 addons.go:69] Setting dashboard=true in profile "no-preload-947754"
	I1027 23:25:28.870773 1365166 addons.go:238] Setting addon dashboard=true in "no-preload-947754"
	W1027 23:25:28.870799 1365166 addons.go:247] addon dashboard should already be in state true
	I1027 23:25:28.870836 1365166 host.go:66] Checking if "no-preload-947754" exists ...
	I1027 23:25:28.871099 1365166 addons.go:69] Setting default-storageclass=true in profile "no-preload-947754"
	I1027 23:25:28.871126 1365166 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-947754"
	I1027 23:25:28.871335 1365166 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:25:28.871440 1365166 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:25:28.875082 1365166 out.go:179] * Verifying Kubernetes components...
	I1027 23:25:28.878279 1365166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:25:28.930202 1365166 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 23:25:28.934205 1365166 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 23:25:28.937159 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 23:25:28.937184 1365166 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 23:25:28.937166 1365166 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:25:28.937252 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:28.941601 1365166 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:25:28.941626 1365166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:25:28.941691 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:28.943937 1365166 addons.go:238] Setting addon default-storageclass=true in "no-preload-947754"
	W1027 23:25:28.943957 1365166 addons.go:247] addon default-storageclass should already be in state true
	I1027 23:25:28.943982 1365166 host.go:66] Checking if "no-preload-947754" exists ...
	I1027 23:25:28.944403 1365166 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:25:28.995021 1365166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:25:28.997454 1365166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:25:29.004154 1365166 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:25:29.004180 1365166 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:25:29.004252 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:29.035994 1365166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:25:30.430727 1362600 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.500895657s
	I1027 23:25:30.432800 1362600 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 23:25:30.433154 1362600 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1027 23:25:30.433473 1362600 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 23:25:30.433771 1362600 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 23:25:29.384617 1365166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:25:29.440000 1365166 node_ready.go:35] waiting up to 6m0s for node "no-preload-947754" to be "Ready" ...
	I1027 23:25:29.451873 1365166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:25:29.512350 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 23:25:29.512379 1365166 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 23:25:29.572037 1365166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:25:29.621197 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 23:25:29.621224 1365166 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 23:25:29.714394 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 23:25:29.714418 1365166 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 23:25:29.816465 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 23:25:29.816493 1365166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 23:25:29.894572 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 23:25:29.894599 1365166 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 23:25:29.966665 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 23:25:29.966692 1365166 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 23:25:30.041450 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 23:25:30.041491 1365166 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 23:25:30.085222 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 23:25:30.085253 1365166 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 23:25:30.121719 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 23:25:30.121749 1365166 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 23:25:30.153298 1365166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 23:25:36.933475 1362600 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.499427779s
	I1027 23:25:36.210900 1365166 node_ready.go:49] node "no-preload-947754" is "Ready"
	I1027 23:25:36.210975 1365166 node_ready.go:38] duration metric: took 6.770941657s for node "no-preload-947754" to be "Ready" ...
	I1027 23:25:36.211004 1365166 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:25:36.211099 1365166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:25:39.711181 1365166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.259272583s)
	I1027 23:25:39.711284 1365166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.139220846s)
	I1027 23:25:39.711403 1365166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.558072475s)
	I1027 23:25:39.711442 1365166 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.500308546s)
	I1027 23:25:39.711712 1365166 api_server.go:72] duration metric: took 10.842208237s to wait for apiserver process to appear ...
	I1027 23:25:39.711740 1365166 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:25:39.711783 1365166 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 23:25:39.714565 1365166 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-947754 addons enable metrics-server
	
	I1027 23:25:39.734223 1365166 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 23:25:39.735470 1365166 api_server.go:141] control plane version: v1.34.1
	I1027 23:25:39.735540 1365166 api_server.go:131] duration metric: took 23.777271ms to wait for apiserver health ...
	I1027 23:25:39.735564 1365166 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:25:39.753119 1365166 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 23:25:39.888980 1362600 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.45479343s
	I1027 23:25:41.939270 1362600 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.505678373s
	I1027 23:25:41.965830 1362600 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 23:25:41.994710 1362600 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 23:25:42.041614 1362600 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 23:25:42.042154 1362600 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-790322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 23:25:42.072977 1362600 kubeadm.go:319] [bootstrap-token] Using token: 2pihna.mdcf9qb8cpwz02aw
	I1027 23:25:42.077949 1362600 out.go:252]   - Configuring RBAC rules ...
	I1027 23:25:42.078110 1362600 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 23:25:42.105071 1362600 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 23:25:42.137510 1362600 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 23:25:42.172857 1362600 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 23:25:42.191241 1362600 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 23:25:42.205040 1362600 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 23:25:42.360619 1362600 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 23:25:42.799152 1362600 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 23:25:43.352499 1362600 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 23:25:43.354205 1362600 kubeadm.go:319] 
	I1027 23:25:43.354284 1362600 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 23:25:43.354297 1362600 kubeadm.go:319] 
	I1027 23:25:43.354423 1362600 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 23:25:43.354431 1362600 kubeadm.go:319] 
	I1027 23:25:43.354457 1362600 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 23:25:43.357714 1362600 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 23:25:43.357782 1362600 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 23:25:43.357787 1362600 kubeadm.go:319] 
	I1027 23:25:43.357844 1362600 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 23:25:43.357849 1362600 kubeadm.go:319] 
	I1027 23:25:43.357919 1362600 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 23:25:43.357933 1362600 kubeadm.go:319] 
	I1027 23:25:43.357989 1362600 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 23:25:43.358067 1362600 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 23:25:43.358138 1362600 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 23:25:43.358142 1362600 kubeadm.go:319] 
	I1027 23:25:43.358519 1362600 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 23:25:43.358615 1362600 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 23:25:43.358621 1362600 kubeadm.go:319] 
	I1027 23:25:43.358941 1362600 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2pihna.mdcf9qb8cpwz02aw \
	I1027 23:25:43.359055 1362600 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 \
	I1027 23:25:43.359270 1362600 kubeadm.go:319] 	--control-plane 
	I1027 23:25:43.359280 1362600 kubeadm.go:319] 
	I1027 23:25:43.359567 1362600 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 23:25:43.359577 1362600 kubeadm.go:319] 
	I1027 23:25:43.359871 1362600 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2pihna.mdcf9qb8cpwz02aw \
	I1027 23:25:43.360163 1362600 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 
	I1027 23:25:43.374364 1362600 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 23:25:43.374839 1362600 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 23:25:43.374971 1362600 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 23:25:43.374982 1362600 cni.go:84] Creating CNI manager for ""
	I1027 23:25:43.374990 1362600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:25:43.378597 1362600 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 23:25:39.754229 1365166 system_pods.go:59] 8 kube-system pods found
	I1027 23:25:39.754271 1365166 system_pods.go:61] "coredns-66bc5c9577-mzm5d" [7af0a1a1-b33d-4152-ac15-91c2455b2d4c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:25:39.754278 1365166 system_pods.go:61] "etcd-no-preload-947754" [2be2c2d6-87dd-46e1-bc61-0b07f2a00a01] Running
	I1027 23:25:39.754284 1365166 system_pods.go:61] "kindnet-m7l4b" [baea7a6f-5608-4c48-bd36-abcd541e2d3b] Running
	I1027 23:25:39.754291 1365166 system_pods.go:61] "kube-apiserver-no-preload-947754" [19186a0e-373f-47f0-8e69-26a83b51bf39] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:25:39.754301 1365166 system_pods.go:61] "kube-controller-manager-no-preload-947754" [57f740fa-db37-4cbe-a187-a442c308ecc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:25:39.754312 1365166 system_pods.go:61] "kube-proxy-29878" [affca46b-bf6e-4821-a5e4-d7082cacdc04] Running
	I1027 23:25:39.754320 1365166 system_pods.go:61] "kube-scheduler-no-preload-947754" [62236697-12d4-40a2-b609-4cec58ee0277] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:25:39.754325 1365166 system_pods.go:61] "storage-provisioner" [7d8c57e3-c8ca-4466-9c32-fb227a39b7c5] Running
	I1027 23:25:39.754338 1365166 system_pods.go:74] duration metric: took 18.754865ms to wait for pod list to return data ...
	I1027 23:25:39.754346 1365166 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:25:39.756002 1365166 addons.go:514] duration metric: took 10.88612916s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 23:25:39.760139 1365166 default_sa.go:45] found service account: "default"
	I1027 23:25:39.760219 1365166 default_sa.go:55] duration metric: took 5.841838ms for default service account to be created ...
	I1027 23:25:39.760244 1365166 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:25:39.776714 1365166 system_pods.go:86] 8 kube-system pods found
	I1027 23:25:39.776795 1365166 system_pods.go:89] "coredns-66bc5c9577-mzm5d" [7af0a1a1-b33d-4152-ac15-91c2455b2d4c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:25:39.776817 1365166 system_pods.go:89] "etcd-no-preload-947754" [2be2c2d6-87dd-46e1-bc61-0b07f2a00a01] Running
	I1027 23:25:39.776841 1365166 system_pods.go:89] "kindnet-m7l4b" [baea7a6f-5608-4c48-bd36-abcd541e2d3b] Running
	I1027 23:25:39.776877 1365166 system_pods.go:89] "kube-apiserver-no-preload-947754" [19186a0e-373f-47f0-8e69-26a83b51bf39] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:25:39.776910 1365166 system_pods.go:89] "kube-controller-manager-no-preload-947754" [57f740fa-db37-4cbe-a187-a442c308ecc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:25:39.776930 1365166 system_pods.go:89] "kube-proxy-29878" [affca46b-bf6e-4821-a5e4-d7082cacdc04] Running
	I1027 23:25:39.776951 1365166 system_pods.go:89] "kube-scheduler-no-preload-947754" [62236697-12d4-40a2-b609-4cec58ee0277] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:25:39.776980 1365166 system_pods.go:89] "storage-provisioner" [7d8c57e3-c8ca-4466-9c32-fb227a39b7c5] Running
	I1027 23:25:39.777007 1365166 system_pods.go:126] duration metric: took 16.745122ms to wait for k8s-apps to be running ...
	I1027 23:25:39.777031 1365166 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:25:39.777115 1365166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:25:39.815077 1365166 system_svc.go:56] duration metric: took 38.037566ms WaitForService to wait for kubelet
	I1027 23:25:39.815161 1365166 kubeadm.go:587] duration metric: took 10.945656982s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:25:39.815195 1365166 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:25:39.829259 1365166 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:25:39.829334 1365166 node_conditions.go:123] node cpu capacity is 2
	I1027 23:25:39.829362 1365166 node_conditions.go:105] duration metric: took 14.145857ms to run NodePressure ...
	I1027 23:25:39.829388 1365166 start.go:242] waiting for startup goroutines ...
	I1027 23:25:39.829422 1365166 start.go:247] waiting for cluster config update ...
	I1027 23:25:39.829455 1365166 start.go:256] writing updated cluster config ...
	I1027 23:25:39.829801 1365166 ssh_runner.go:195] Run: rm -f paused
	I1027 23:25:39.848349 1365166 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:25:39.862712 1365166 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mzm5d" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 23:25:41.904658 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	I1027 23:25:43.382540 1362600 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 23:25:43.386833 1362600 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 23:25:43.386852 1362600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 23:25:43.415217 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 23:25:43.764171 1362600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 23:25:43.764268 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:43.764305 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-790322 minikube.k8s.io/updated_at=2025_10_27T23_25_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=embed-certs-790322 minikube.k8s.io/primary=true
	I1027 23:25:43.973241 1362600 ops.go:34] apiserver oom_adj: -16
	I1027 23:25:43.973357 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:44.473727 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:44.973993 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:45.474445 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:45.974361 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:46.474291 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:46.973678 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:47.474118 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:47.973650 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:48.289417 1362600 kubeadm.go:1114] duration metric: took 4.525207547s to wait for elevateKubeSystemPrivileges
	I1027 23:25:48.289443 1362600 kubeadm.go:403] duration metric: took 29.722300277s to StartCluster
	I1027 23:25:48.289460 1362600 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:25:48.289522 1362600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:25:48.290968 1362600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:25:48.291180 1362600 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:25:48.291315 1362600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 23:25:48.291588 1362600 config.go:182] Loaded profile config "embed-certs-790322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:25:48.291798 1362600 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:25:48.291897 1362600 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-790322"
	I1027 23:25:48.291930 1362600 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-790322"
	I1027 23:25:48.291984 1362600 host.go:66] Checking if "embed-certs-790322" exists ...
	I1027 23:25:48.292593 1362600 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:25:48.292016 1362600 addons.go:69] Setting default-storageclass=true in profile "embed-certs-790322"
	I1027 23:25:48.293016 1362600 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-790322"
	I1027 23:25:48.293402 1362600 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:25:48.296665 1362600 out.go:179] * Verifying Kubernetes components...
	I1027 23:25:48.306599 1362600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:25:48.332706 1362600 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1027 23:25:44.370678 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:25:46.869171 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	I1027 23:25:48.336024 1362600 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:25:48.336048 1362600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:25:48.336110 1362600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:25:48.337508 1362600 addons.go:238] Setting addon default-storageclass=true in "embed-certs-790322"
	I1027 23:25:48.337543 1362600 host.go:66] Checking if "embed-certs-790322" exists ...
	I1027 23:25:48.338003 1362600 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:25:48.378996 1362600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34574 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:25:48.386549 1362600 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:25:48.386572 1362600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:25:48.386639 1362600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:25:48.413428 1362600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34574 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:25:49.059236 1362600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:25:49.059361 1362600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 23:25:49.064120 1362600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:25:49.107300 1362600 node_ready.go:35] waiting up to 6m0s for node "embed-certs-790322" to be "Ready" ...
	I1027 23:25:49.177604 1362600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:25:50.189503 1362600 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.130035001s)
	I1027 23:25:50.189607 1362600 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1027 23:25:50.524920 1362600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.34720525s)
	I1027 23:25:50.525244 1362600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.461093501s)
	I1027 23:25:50.554486 1362600 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 23:25:50.557377 1362600 addons.go:514] duration metric: took 2.265561543s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 23:25:50.694351 1362600 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-790322" context rescaled to 1 replicas
	W1027 23:25:51.110875 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:25:49.369482 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:25:51.372755 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:25:53.869333 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:25:53.610615 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:25:56.110990 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:25:58.111435 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:25:56.368271 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:25:58.868809 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:26:00.128724 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:02.611227 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:01.368672 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:26:03.868748 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:26:05.110211 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:07.111151 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:06.368257 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:26:08.378787 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:26:09.611257 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:12.110228 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:10.868218 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	I1027 23:26:12.368788 1365166 pod_ready.go:94] pod "coredns-66bc5c9577-mzm5d" is "Ready"
	I1027 23:26:12.368816 1365166 pod_ready.go:86] duration metric: took 32.506033244s for pod "coredns-66bc5c9577-mzm5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:12.371523 1365166 pod_ready.go:83] waiting for pod "etcd-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:12.375993 1365166 pod_ready.go:94] pod "etcd-no-preload-947754" is "Ready"
	I1027 23:26:12.376032 1365166 pod_ready.go:86] duration metric: took 4.479965ms for pod "etcd-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:12.378299 1365166 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:12.383341 1365166 pod_ready.go:94] pod "kube-apiserver-no-preload-947754" is "Ready"
	I1027 23:26:12.383369 1365166 pod_ready.go:86] duration metric: took 5.041031ms for pod "kube-apiserver-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:12.385854 1365166 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:12.566652 1365166 pod_ready.go:94] pod "kube-controller-manager-no-preload-947754" is "Ready"
	I1027 23:26:12.566677 1365166 pod_ready.go:86] duration metric: took 180.759058ms for pod "kube-controller-manager-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:12.766683 1365166 pod_ready.go:83] waiting for pod "kube-proxy-29878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:13.167013 1365166 pod_ready.go:94] pod "kube-proxy-29878" is "Ready"
	I1027 23:26:13.167043 1365166 pod_ready.go:86] duration metric: took 400.333252ms for pod "kube-proxy-29878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:13.367335 1365166 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:13.766796 1365166 pod_ready.go:94] pod "kube-scheduler-no-preload-947754" is "Ready"
	I1027 23:26:13.766830 1365166 pod_ready.go:86] duration metric: took 399.467238ms for pod "kube-scheduler-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:13.766844 1365166 pod_ready.go:40] duration metric: took 33.918408882s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:26:13.824702 1365166 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:26:13.830035 1365166 out.go:179] * Done! kubectl is now configured to use "no-preload-947754" cluster and "default" namespace by default
	W1027 23:26:14.111076 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:16.611293 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:19.110066 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:21.110786 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:23.110935 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 27 23:26:14 no-preload-947754 crio[650]: time="2025-10-27T23:26:14.818351148Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:26:14 no-preload-947754 crio[650]: time="2025-10-27T23:26:14.834491339Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:26:14 no-preload-947754 crio[650]: time="2025-10-27T23:26:14.835423487Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:26:14 no-preload-947754 crio[650]: time="2025-10-27T23:26:14.851407597Z" level=info msg="Created container 95d9328dd9ac768fcd96be887568f43b7a718761d9ae83cb1ca842b6af910fce: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx/dashboard-metrics-scraper" id=f1e0d07c-24c5-45e3-a883-c8cfccd364b9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:26:14 no-preload-947754 crio[650]: time="2025-10-27T23:26:14.852669605Z" level=info msg="Starting container: 95d9328dd9ac768fcd96be887568f43b7a718761d9ae83cb1ca842b6af910fce" id=c18e9eb4-c34a-4798-b433-3d2a70b6dd52 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:26:14 no-preload-947754 crio[650]: time="2025-10-27T23:26:14.856174105Z" level=info msg="Started container" PID=1658 containerID=95d9328dd9ac768fcd96be887568f43b7a718761d9ae83cb1ca842b6af910fce description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx/dashboard-metrics-scraper id=c18e9eb4-c34a-4798-b433-3d2a70b6dd52 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9f3a0f88ad441f594e64513a1139d5bf7a5bc886062ee1e5b678d9833abfa4f9
	Oct 27 23:26:14 no-preload-947754 conmon[1656]: conmon 95d9328dd9ac768fcd96 <ninfo>: container 1658 exited with status 1
	Oct 27 23:26:15 no-preload-947754 crio[650]: time="2025-10-27T23:26:15.145871575Z" level=info msg="Removing container: c494bf0e9a0ae4235582055e5637aefda392e725d01389766fd626081efd7084" id=76bd515a-6284-4bee-9f58-9eb92422bb4e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 23:26:15 no-preload-947754 crio[650]: time="2025-10-27T23:26:15.157143414Z" level=info msg="Error loading conmon cgroup of container c494bf0e9a0ae4235582055e5637aefda392e725d01389766fd626081efd7084: cgroup deleted" id=76bd515a-6284-4bee-9f58-9eb92422bb4e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 23:26:15 no-preload-947754 crio[650]: time="2025-10-27T23:26:15.161688266Z" level=info msg="Removed container c494bf0e9a0ae4235582055e5637aefda392e725d01389766fd626081efd7084: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx/dashboard-metrics-scraper" id=76bd515a-6284-4bee-9f58-9eb92422bb4e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.452090951Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.458883891Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.458924909Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.458951429Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.462085963Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.462121123Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.462155445Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.467104028Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.467141296Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.467168307Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.478053778Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.478091842Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.478117524Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.482062244Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.482104928Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	95d9328dd9ac7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago       Exited              dashboard-metrics-scraper   2                   9f3a0f88ad441       dashboard-metrics-scraper-6ffb444bf9-ls2dx   kubernetes-dashboard
	a9afcfa94ebd1       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           19 seconds ago       Running             storage-provisioner         2                   5cbea8e666633       storage-provisioner                          kube-system
	d820306abf607       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago       Running             kubernetes-dashboard        0                   6169a4d9afc1b       kubernetes-dashboard-855c9754f9-zxvvw        kubernetes-dashboard
	dce502a098734       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago       Running             coredns                     1                   4c3268cc79490       coredns-66bc5c9577-mzm5d                     kube-system
	1d5289ac78c72       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago       Running             busybox                     1                   6f61bfdd93ec6       busybox                                      default
	411070ec7a49e       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           50 seconds ago       Exited              storage-provisioner         1                   5cbea8e666633       storage-provisioner                          kube-system
	72419b65a3b57       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago       Running             kindnet-cni                 1                   4a9045afbc941       kindnet-m7l4b                                kube-system
	f06617fb88cc0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago       Running             kube-proxy                  1                   154a97e76c812       kube-proxy-29878                             kube-system
	9f23df14f2981       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   f0634a0001467       kube-apiserver-no-preload-947754             kube-system
	8d31e22ed9a43       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   476d6455852bf       kube-controller-manager-no-preload-947754    kube-system
	cf65868161337       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   285e1a0f4ccd0       etcd-no-preload-947754                       kube-system
	753952329c804       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   51180ffafaf96       kube-scheduler-no-preload-947754             kube-system
	
	
	==> coredns [dce502a0987347d98c1fadd581f5383d9c39aebc92f303d3c2f85a014ca708fd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56369 - 43646 "HINFO IN 5642184654014402772.6034745111912342011. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01432634s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-947754
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-947754
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=no-preload-947754
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T23_24_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 23:24:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-947754
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 23:26:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 23:26:06 +0000   Mon, 27 Oct 2025 23:24:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 23:26:06 +0000   Mon, 27 Oct 2025 23:24:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 23:26:06 +0000   Mon, 27 Oct 2025 23:24:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 23:26:06 +0000   Mon, 27 Oct 2025 23:24:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-947754
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                c8ec03af-833c-45dd-b53c-bcc66992da89
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-mzm5d                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-no-preload-947754                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-m7l4b                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-947754              250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-947754     200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-29878                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-947754              100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ls2dx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zxvvw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 110s                 kube-proxy       
	  Normal   Starting                 47s                  kube-proxy       
	  Normal   Starting                 2m8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m8s (x7 over 2m8s)  kubelet          Node no-preload-947754 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s (x7 over 2m8s)  kubelet          Node no-preload-947754 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s (x7 over 2m8s)  kubelet          Node no-preload-947754 status is now: NodeHasSufficientPID
	  Normal   Starting                 117s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 117s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    116s                 kubelet          Node no-preload-947754 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     116s                 kubelet          Node no-preload-947754 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  116s                 kubelet          Node no-preload-947754 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           113s                 node-controller  Node no-preload-947754 event: Registered Node no-preload-947754 in Controller
	  Normal   NodeReady                97s                  kubelet          Node no-preload-947754 status is now: NodeReady
	  Normal   Starting                 61s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)    kubelet          Node no-preload-947754 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)    kubelet          Node no-preload-947754 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)    kubelet          Node no-preload-947754 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           47s                  node-controller  Node no-preload-947754 event: Registered Node no-preload-947754 in Controller
	
	
	==> dmesg <==
	[Oct27 23:01] overlayfs: idmapped layers are currently not supported
	[ +42.515610] overlayfs: idmapped layers are currently not supported
	[Oct27 23:02] overlayfs: idmapped layers are currently not supported
	[Oct27 23:03] overlayfs: idmapped layers are currently not supported
	[Oct27 23:04] overlayfs: idmapped layers are currently not supported
	[Oct27 23:06] overlayfs: idmapped layers are currently not supported
	[  +3.129054] overlayfs: idmapped layers are currently not supported
	[Oct27 23:08] overlayfs: idmapped layers are currently not supported
	[Oct27 23:09] overlayfs: idmapped layers are currently not supported
	[  +0.696324] overlayfs: idmapped layers are currently not supported
	[ +42.065460] overlayfs: idmapped layers are currently not supported
	[Oct27 23:10] overlayfs: idmapped layers are currently not supported
	[ +23.722860] overlayfs: idmapped layers are currently not supported
	[Oct27 23:16] overlayfs: idmapped layers are currently not supported
	[Oct27 23:17] overlayfs: idmapped layers are currently not supported
	[Oct27 23:18] overlayfs: idmapped layers are currently not supported
	[Oct27 23:19] overlayfs: idmapped layers are currently not supported
	[Oct27 23:20] overlayfs: idmapped layers are currently not supported
	[Oct27 23:21] overlayfs: idmapped layers are currently not supported
	[Oct27 23:22] overlayfs: idmapped layers are currently not supported
	[ +34.590925] overlayfs: idmapped layers are currently not supported
	[Oct27 23:23] overlayfs: idmapped layers are currently not supported
	[  +6.906011] overlayfs: idmapped layers are currently not supported
	[Oct27 23:25] overlayfs: idmapped layers are currently not supported
	[  +2.284017] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [cf6586816133757006922d7552cfb82bf56a3f786053d6ff45e949dbf3a4d391] <==
	{"level":"warn","ts":"2025-10-27T23:25:32.716351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:32.837752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:32.884633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:32.932339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:32.980838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.006511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.037601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.072351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.108499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.145759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.192960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.227135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.270432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.312007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.345238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.394663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.448136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.519764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.602507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.684234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.786579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.819244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.874299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.962000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:34.182697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43094","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:26:28 up  6:08,  0 user,  load average: 4.59, 4.15, 3.37
	Linux no-preload-947754 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [72419b65a3b57a571d664d92c78cb819499e775deac68bc21b2c1056c29b67bc] <==
	I1027 23:25:38.157982       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:25:38.162663       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 23:25:38.164422       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:25:38.164493       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:25:38.164531       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:25:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:25:38.451373       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:25:38.466002       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:25:38.466069       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:25:38.467006       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 23:26:08.452248       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 23:26:08.466849       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 23:26:08.466876       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 23:26:08.466986       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1027 23:26:10.167241       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 23:26:10.167275       1 metrics.go:72] Registering metrics
	I1027 23:26:10.167347       1 controller.go:711] "Syncing nftables rules"
	I1027 23:26:18.451733       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 23:26:18.451800       1 main.go:301] handling current node
	I1027 23:26:28.451675       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 23:26:28.451737       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9f23df14f2981858d26fa46d7024756723417501e064c150efed848207a12d0c] <==
	I1027 23:25:36.381515       1 cache.go:39] Caches are synced for autoregister controller
	I1027 23:25:36.393027       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 23:25:36.393130       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 23:25:36.393168       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 23:25:36.394685       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 23:25:36.394707       1 policy_source.go:240] refreshing policies
	I1027 23:25:36.395648       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 23:25:36.395937       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:25:36.419146       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 23:25:36.419185       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 23:25:36.428968       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 23:25:36.452059       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 23:25:36.483172       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 23:25:36.599183       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 23:25:36.872379       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 23:25:38.572198       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 23:25:38.818146       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 23:25:39.094205       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 23:25:39.170895       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 23:25:39.564954       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.229.20"}
	I1027 23:25:39.603424       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.238.198"}
	W1027 23:25:39.616639       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1027 23:25:39.618246       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 23:25:41.368678       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 23:25:41.676857       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8d31e22ed9a43d906de78edcbe062d2a70163bf79ab57e9dd6ef2531387faeea] <==
	I1027 23:25:41.215313       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 23:25:41.215750       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 23:25:41.220414       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 23:25:41.221643       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 23:25:41.225417       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 23:25:41.227629       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 23:25:41.236634       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 23:25:41.238530       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 23:25:41.247348       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 23:25:41.253737       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:25:41.253832       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 23:25:41.253958       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 23:25:41.254083       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-947754"
	I1027 23:25:41.254161       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 23:25:41.254795       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 23:25:41.254906       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:25:41.254920       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 23:25:41.256194       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 23:25:41.256637       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 23:25:41.261561       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:25:41.262297       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 23:25:41.267491       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 23:25:41.291008       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:25:41.291126       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 23:25:41.291157       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [f06617fb88cc02987c92472c35f87309338616d5e8dbb92304621d4132735bbb] <==
	I1027 23:25:40.194705       1 server_linux.go:53] "Using iptables proxy"
	I1027 23:25:40.554447       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 23:25:40.663782       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 23:25:40.663826       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 23:25:40.663921       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 23:25:41.228538       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:25:41.231706       1 server_linux.go:132] "Using iptables Proxier"
	I1027 23:25:41.268592       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 23:25:41.269232       1 server.go:527] "Version info" version="v1.34.1"
	I1027 23:25:41.270132       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:25:41.286926       1 config.go:200] "Starting service config controller"
	I1027 23:25:41.287548       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 23:25:41.287632       1 config.go:106] "Starting endpoint slice config controller"
	I1027 23:25:41.287679       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 23:25:41.287746       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 23:25:41.287775       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 23:25:41.292912       1 config.go:309] "Starting node config controller"
	I1027 23:25:41.293596       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 23:25:41.293632       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 23:25:41.391533       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 23:25:41.391628       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 23:25:41.391654       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [753952329c8042b52b9f0e7089396f8c95422ec863eda044f175ca5860a37dda] <==
	I1027 23:25:38.470733       1 serving.go:386] Generated self-signed cert in-memory
	I1027 23:25:42.925931       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 23:25:42.925968       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:25:42.935316       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 23:25:42.935417       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 23:25:42.935445       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 23:25:42.935488       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 23:25:42.974031       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:25:42.974064       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:25:42.974120       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:25:42.974128       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:25:43.036401       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 23:25:43.074473       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:25:43.074537       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 23:25:43 no-preload-947754 kubelet[767]: I1027 23:25:43.168532     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l9jp\" (UniqueName: \"kubernetes.io/projected/fbebe4c7-b069-41ce-a789-cdbad9d17eb5-kube-api-access-6l9jp\") pod \"dashboard-metrics-scraper-6ffb444bf9-ls2dx\" (UID: \"fbebe4c7-b069-41ce-a789-cdbad9d17eb5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx"
	Oct 27 23:25:43 no-preload-947754 kubelet[767]: I1027 23:25:43.168594     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96tj9\" (UniqueName: \"kubernetes.io/projected/4bbaec9e-8f8f-4fa3-a0c2-09c0878f6f31-kube-api-access-96tj9\") pod \"kubernetes-dashboard-855c9754f9-zxvvw\" (UID: \"4bbaec9e-8f8f-4fa3-a0c2-09c0878f6f31\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zxvvw"
	Oct 27 23:25:43 no-preload-947754 kubelet[767]: I1027 23:25:43.168623     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fbebe4c7-b069-41ce-a789-cdbad9d17eb5-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-ls2dx\" (UID: \"fbebe4c7-b069-41ce-a789-cdbad9d17eb5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx"
	Oct 27 23:25:43 no-preload-947754 kubelet[767]: I1027 23:25:43.168675     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4bbaec9e-8f8f-4fa3-a0c2-09c0878f6f31-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-zxvvw\" (UID: \"4bbaec9e-8f8f-4fa3-a0c2-09c0878f6f31\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zxvvw"
	Oct 27 23:25:43 no-preload-947754 kubelet[767]: W1027 23:25:43.423401     767 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd/crio-6169a4d9afc1b400f1f202e0441af71dc32d112e082ce9b2fefc2bf232e6098a WatchSource:0}: Error finding container 6169a4d9afc1b400f1f202e0441af71dc32d112e082ce9b2fefc2bf232e6098a: Status 404 returned error can't find the container with id 6169a4d9afc1b400f1f202e0441af71dc32d112e082ce9b2fefc2bf232e6098a
	Oct 27 23:25:43 no-preload-947754 kubelet[767]: W1027 23:25:43.454551     767 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd/crio-9f3a0f88ad441f594e64513a1139d5bf7a5bc886062ee1e5b678d9833abfa4f9 WatchSource:0}: Error finding container 9f3a0f88ad441f594e64513a1139d5bf7a5bc886062ee1e5b678d9833abfa4f9: Status 404 returned error can't find the container with id 9f3a0f88ad441f594e64513a1139d5bf7a5bc886062ee1e5b678d9833abfa4f9
	Oct 27 23:25:51 no-preload-947754 kubelet[767]: I1027 23:25:51.083604     767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zxvvw" podStartSLOduration=3.193574978 podStartE2EDuration="10.083587142s" podCreationTimestamp="2025-10-27 23:25:41 +0000 UTC" firstStartedPulling="2025-10-27 23:25:43.4294614 +0000 UTC m=+15.974640224" lastFinishedPulling="2025-10-27 23:25:50.319473564 +0000 UTC m=+22.864652388" observedRunningTime="2025-10-27 23:25:51.081133479 +0000 UTC m=+23.626312303" watchObservedRunningTime="2025-10-27 23:25:51.083587142 +0000 UTC m=+23.628765974"
	Oct 27 23:25:56 no-preload-947754 kubelet[767]: I1027 23:25:56.082366     767 scope.go:117] "RemoveContainer" containerID="779e4d613c1da5c39da3dd9d90eb8a837ca3e84a99b61a1c7c08228a6c454e0d"
	Oct 27 23:25:57 no-preload-947754 kubelet[767]: I1027 23:25:57.086144     767 scope.go:117] "RemoveContainer" containerID="779e4d613c1da5c39da3dd9d90eb8a837ca3e84a99b61a1c7c08228a6c454e0d"
	Oct 27 23:25:57 no-preload-947754 kubelet[767]: I1027 23:25:57.086744     767 scope.go:117] "RemoveContainer" containerID="c494bf0e9a0ae4235582055e5637aefda392e725d01389766fd626081efd7084"
	Oct 27 23:25:57 no-preload-947754 kubelet[767]: E1027 23:25:57.086922     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2dx_kubernetes-dashboard(fbebe4c7-b069-41ce-a789-cdbad9d17eb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx" podUID="fbebe4c7-b069-41ce-a789-cdbad9d17eb5"
	Oct 27 23:25:58 no-preload-947754 kubelet[767]: I1027 23:25:58.090173     767 scope.go:117] "RemoveContainer" containerID="c494bf0e9a0ae4235582055e5637aefda392e725d01389766fd626081efd7084"
	Oct 27 23:25:58 no-preload-947754 kubelet[767]: E1027 23:25:58.090336     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2dx_kubernetes-dashboard(fbebe4c7-b069-41ce-a789-cdbad9d17eb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx" podUID="fbebe4c7-b069-41ce-a789-cdbad9d17eb5"
	Oct 27 23:26:03 no-preload-947754 kubelet[767]: I1027 23:26:03.371082     767 scope.go:117] "RemoveContainer" containerID="c494bf0e9a0ae4235582055e5637aefda392e725d01389766fd626081efd7084"
	Oct 27 23:26:03 no-preload-947754 kubelet[767]: E1027 23:26:03.371276     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2dx_kubernetes-dashboard(fbebe4c7-b069-41ce-a789-cdbad9d17eb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx" podUID="fbebe4c7-b069-41ce-a789-cdbad9d17eb5"
	Oct 27 23:26:09 no-preload-947754 kubelet[767]: I1027 23:26:09.124574     767 scope.go:117] "RemoveContainer" containerID="411070ec7a49e4f7f558d049d91a93e52b7f68d46532edcf9784b3a28da65fe6"
	Oct 27 23:26:14 no-preload-947754 kubelet[767]: I1027 23:26:14.814967     767 scope.go:117] "RemoveContainer" containerID="c494bf0e9a0ae4235582055e5637aefda392e725d01389766fd626081efd7084"
	Oct 27 23:26:15 no-preload-947754 kubelet[767]: I1027 23:26:15.143518     767 scope.go:117] "RemoveContainer" containerID="c494bf0e9a0ae4235582055e5637aefda392e725d01389766fd626081efd7084"
	Oct 27 23:26:15 no-preload-947754 kubelet[767]: I1027 23:26:15.143815     767 scope.go:117] "RemoveContainer" containerID="95d9328dd9ac768fcd96be887568f43b7a718761d9ae83cb1ca842b6af910fce"
	Oct 27 23:26:15 no-preload-947754 kubelet[767]: E1027 23:26:15.143986     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2dx_kubernetes-dashboard(fbebe4c7-b069-41ce-a789-cdbad9d17eb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx" podUID="fbebe4c7-b069-41ce-a789-cdbad9d17eb5"
	Oct 27 23:26:23 no-preload-947754 kubelet[767]: I1027 23:26:23.370436     767 scope.go:117] "RemoveContainer" containerID="95d9328dd9ac768fcd96be887568f43b7a718761d9ae83cb1ca842b6af910fce"
	Oct 27 23:26:23 no-preload-947754 kubelet[767]: E1027 23:26:23.370623     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2dx_kubernetes-dashboard(fbebe4c7-b069-41ce-a789-cdbad9d17eb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx" podUID="fbebe4c7-b069-41ce-a789-cdbad9d17eb5"
	Oct 27 23:26:26 no-preload-947754 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 23:26:26 no-preload-947754 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 23:26:26 no-preload-947754 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [d820306abf607ac55bcab84f8735d57b9b838b6f2dcd5d7b45c692707223d95a] <==
	2025/10/27 23:25:50 Using namespace: kubernetes-dashboard
	2025/10/27 23:25:50 Using in-cluster config to connect to apiserver
	2025/10/27 23:25:50 Using secret token for csrf signing
	2025/10/27 23:25:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 23:25:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 23:25:50 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 23:25:50 Generating JWE encryption key
	2025/10/27 23:25:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 23:25:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 23:25:51 Initializing JWE encryption key from synchronized object
	2025/10/27 23:25:51 Creating in-cluster Sidecar client
	2025/10/27 23:25:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 23:25:51 Serving insecurely on HTTP port: 9090
	2025/10/27 23:26:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 23:25:50 Starting overwatch
	
	
	==> storage-provisioner [411070ec7a49e4f7f558d049d91a93e52b7f68d46532edcf9784b3a28da65fe6] <==
	I1027 23:25:38.797275       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 23:26:08.807592       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a9afcfa94ebd1357f2da7111c52cf9032a26396ad5338a0fbec038de3eb2dfd0] <==
	I1027 23:26:09.177533       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 23:26:09.196723       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 23:26:09.196925       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 23:26:09.200442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:12.656784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:16.917647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:20.516147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:23.570108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:26.592158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:26.597579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:26:26.597741       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 23:26:26.597902       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-947754_74ed6230-2ff8-4940-bc04-93941c6437a3!
	W1027 23:26:26.602936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:26:26.604096       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"77faea05-b4f8-4145-b717-91f936278f59", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-947754_74ed6230-2ff8-4940-bc04-93941c6437a3 became leader
	W1027 23:26:26.622622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:26:26.698539       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-947754_74ed6230-2ff8-4940-bc04-93941c6437a3!
	W1027 23:26:28.626340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:28.633408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
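Each "==> name <==" header in the capture above names the component whose recent log lines follow. As a rough sketch of how the same logs could be pulled from the node by hand (the kubelet unit name and the etcd container ID prefix come from the capture above; the exact ssh/crictl invocations here are illustrative, not the harness's own calls):

	# kubelet runs as a systemd unit inside the node container
	out/minikube-linux-arm64 -p no-preload-947754 ssh -- sudo journalctl -u kubelet -n 25
	# control-plane components run as CRI-O containers; their IDs appear in the section headers
	out/minikube-linux-arm64 -p no-preload-947754 ssh -- sudo crictl logs --tail 25 cf6586816133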
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-947754 -n no-preload-947754
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-947754 -n no-preload-947754: exit status 2 (500.853343ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
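The format-template query above prints "Running" even though the command exits 2: minikube status appears to encode per-component health in its exit code in addition to its output, which is why the harness flags exit status 2 as "may be ok". A hedged one-shot check of the main components (the .Kubelet and .Kubeconfig field names are assumed from minikube's status template and do not appear in this capture):

	out/minikube-linux-arm64 status -p no-preload-947754 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'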
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-947754 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
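The pod phase check above can be rerun by hand against the same context; this is the harness's own invocation with plain table output instead of jsonpath, so an empty result means every pod reports phase Running:

	kubectl --context no-preload-947754 get po -A --field-selector=status.phase!=Running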
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
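All three proxy variables are empty here, so the snapshot rules the host proxy configuration out as a factor. When a proxy is set, minikube expects the node IP to be excluded from it or kubectl and the status probes cannot reach the apiserver; a minimal sketch, using the node IP recorded in the docker inspect output below:

	export NO_PROXY=$NO_PROXY,192.168.76.2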
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-947754
helpers_test.go:243: (dbg) docker inspect no-preload-947754:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd",
	        "Created": "2025-10-27T23:23:41.900111117Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1365304,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T23:25:19.512967942Z",
	            "FinishedAt": "2025-10-27T23:25:18.463045545Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd/hostname",
	        "HostsPath": "/var/lib/docker/containers/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd/hosts",
	        "LogPath": "/var/lib/docker/containers/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd-json.log",
	        "Name": "/no-preload-947754",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-947754:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-947754",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd",
	                "LowerDir": "/var/lib/docker/overlay2/6c5ee39391503335b6c35014a89cbd6eea86fe3f643e367e6da44c26ee368544-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c5ee39391503335b6c35014a89cbd6eea86fe3f643e367e6da44c26ee368544/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c5ee39391503335b6c35014a89cbd6eea86fe3f643e367e6da44c26ee368544/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c5ee39391503335b6c35014a89cbd6eea86fe3f643e367e6da44c26ee368544/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-947754",
	                "Source": "/var/lib/docker/volumes/no-preload-947754/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-947754",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-947754",
	                "name.minikube.sigs.k8s.io": "no-preload-947754",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "29095ec715bd63aecaca87e1396283a0978bf22fd537dfb7541c3cebdeeca4c6",
	            "SandboxKey": "/var/run/docker/netns/29095ec715bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34579"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34580"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34583"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34581"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34582"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-947754": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:bf:18:ab:74:96",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0cbf6a9d973fe230cfa5a9e9384a72057cae1f71fd4d9191f2ef370fd36289f9",
	                    "EndpointID": "cda58594b629da7ad9391f41f1b6a4a11cf577c22c152a9f7c43e3064953a874",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-947754",
	                        "c73891b58ca0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
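The full inspect dump is what the harness records for the post-mortem; for a quick pause-state check the same data can be narrowed with a format template (both fields are taken from the JSON above, so the expected output here is "running paused=false"):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-947754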
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-947754 -n no-preload-947754
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-947754 -n no-preload-947754: exit status 2 (393.814437ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-947754 logs -n 25
E1027 23:26:30.834274 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/enable-default-cni-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-947754 logs -n 25: (1.319567182s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-440075 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-440075 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo containerd config dump                                                                                                                                                                                                  │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ stop    │ -p old-k8s-version-477179 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo crio config                                                                                                                                                                                                             │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ delete  │ -p bridge-440075                                                                                                                                                                                                                              │ bridge-440075          │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ start   │ -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-947754      │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:24 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-477179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ start   │ -p old-k8s-version-477179 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:24 UTC │
	│ image   │ old-k8s-version-477179 image list --format=json                                                                                                                                                                                               │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │ 27 Oct 25 23:24 UTC │
	│ pause   │ -p old-k8s-version-477179 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │                     │
	│ delete  │ -p old-k8s-version-477179                                                                                                                                                                                                                     │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │ 27 Oct 25 23:25 UTC │
	│ delete  │ -p old-k8s-version-477179                                                                                                                                                                                                                     │ old-k8s-version-477179 │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ start   │ -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790322     │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-947754 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-947754      │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │                     │
	│ stop    │ -p no-preload-947754 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-947754      │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ addons  │ enable dashboard -p no-preload-947754 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-947754      │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ start   │ -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-947754      │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:26 UTC │
	│ image   │ no-preload-947754 image list --format=json                                                                                                                                                                                                    │ no-preload-947754      │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ pause   │ -p no-preload-947754 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-947754      │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 23:25:19
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 23:25:19.111291 1365166 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:25:19.111468 1365166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:25:19.111474 1365166 out.go:374] Setting ErrFile to fd 2...
	I1027 23:25:19.111480 1365166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:25:19.111742 1365166 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:25:19.112131 1365166 out.go:368] Setting JSON to false
	I1027 23:25:19.113032 1365166 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22068,"bootTime":1761585451,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 23:25:19.113122 1365166 start.go:143] virtualization:  
	I1027 23:25:19.116427 1365166 out.go:179] * [no-preload-947754] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 23:25:19.120355 1365166 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:25:19.120415 1365166 notify.go:221] Checking for updates...
	I1027 23:25:19.126141 1365166 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:25:19.129084 1365166 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:25:19.132145 1365166 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 23:25:19.135156 1365166 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 23:25:19.138679 1365166 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 23:25:19.142000 1365166 config.go:182] Loaded profile config "no-preload-947754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:25:19.142724 1365166 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:25:19.183684 1365166 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 23:25:19.183794 1365166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:25:19.279983 1365166 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 23:25:19.26766076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:25:19.280085 1365166 docker.go:318] overlay module found
	I1027 23:25:19.283260 1365166 out.go:179] * Using the docker driver based on existing profile
	I1027 23:25:19.286113 1365166 start.go:307] selected driver: docker
	I1027 23:25:19.286128 1365166 start.go:928] validating driver "docker" against &{Name:no-preload-947754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-947754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:25:19.286238 1365166 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:25:19.287012 1365166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:25:19.387857 1365166 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 23:25:19.375479233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:25:19.388209 1365166 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:25:19.388239 1365166 cni.go:84] Creating CNI manager for ""
	I1027 23:25:19.388298 1365166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:25:19.388345 1365166 start.go:351] cluster config:
	{Name:no-preload-947754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-947754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:25:19.391534 1365166 out.go:179] * Starting "no-preload-947754" primary control-plane node in "no-preload-947754" cluster
	I1027 23:25:19.394308 1365166 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 23:25:19.397200 1365166 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 23:25:19.399834 1365166 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:25:19.399981 1365166 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/config.json ...
	I1027 23:25:19.400314 1365166 cache.go:107] acquiring lock: {Name:mk1ee9dccf1fed0178bd5f318222a7ec38ae5783 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:19.400392 1365166 cache.go:115] /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1027 23:25:19.400400 1365166 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 101.36µs
	I1027 23:25:19.400409 1365166 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1027 23:25:19.400421 1365166 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 23:25:19.400627 1365166 cache.go:107] acquiring lock: {Name:mk71a4000b532d01990b206adaacbbe8b112aa34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:19.400693 1365166 cache.go:115] /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1027 23:25:19.400702 1365166 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 79.763µs
	I1027 23:25:19.400709 1365166 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1027 23:25:19.400720 1365166 cache.go:107] acquiring lock: {Name:mk4be064d6d5271b09b25f994d534ea81d3dccd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:19.400751 1365166 cache.go:115] /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1027 23:25:19.400756 1365166 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 37.202µs
	I1027 23:25:19.400762 1365166 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1027 23:25:19.400771 1365166 cache.go:107] acquiring lock: {Name:mka01faf9e1a67b26d1b66a062e4766564c5b49c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:19.400796 1365166 cache.go:115] /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1027 23:25:19.400801 1365166 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 31.016µs
	I1027 23:25:19.400807 1365166 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1027 23:25:19.400816 1365166 cache.go:107] acquiring lock: {Name:mk4e70e86d91db286d3cdb14f85d915e029eb8d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:19.400848 1365166 cache.go:115] /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1027 23:25:19.400853 1365166 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 38.672µs
	I1027 23:25:19.400859 1365166 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1027 23:25:19.400869 1365166 cache.go:107] acquiring lock: {Name:mke902fc6f90dc0050e0797caa43a275e42251d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:19.400901 1365166 cache.go:115] /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1027 23:25:19.400906 1365166 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 37.843µs
	I1027 23:25:19.400911 1365166 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1027 23:25:19.400920 1365166 cache.go:107] acquiring lock: {Name:mk5fc1deed394b3a8d8e81fea34381b67cb3ab43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:19.400948 1365166 cache.go:115] /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1027 23:25:19.400954 1365166 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 35.603µs
	I1027 23:25:19.400959 1365166 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1027 23:25:19.400968 1365166 cache.go:107] acquiring lock: {Name:mk2206d14b7d0df15fb0480fd42557fcc1e0691c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:19.401017 1365166 cache.go:115] /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1027 23:25:19.401023 1365166 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 55.607µs
	I1027 23:25:19.401029 1365166 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1027 23:25:19.401040 1365166 cache.go:87] Successfully saved all images to host disk.
	I1027 23:25:19.421698 1365166 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 23:25:19.421717 1365166 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 23:25:19.421730 1365166 cache.go:233] Successfully downloaded all kic artifacts
	I1027 23:25:19.421758 1365166 start.go:360] acquireMachinesLock for no-preload-947754: {Name:mka89090453d09b34a498048eab7a34ab59dc927 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:25:19.421808 1365166 start.go:364] duration metric: took 35.677µs to acquireMachinesLock for "no-preload-947754"
	I1027 23:25:19.421827 1365166 start.go:96] Skipping create...Using existing machine configuration
	I1027 23:25:19.421833 1365166 fix.go:55] fixHost starting: 
	I1027 23:25:19.422095 1365166 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:25:19.450288 1365166 fix.go:113] recreateIfNeeded on no-preload-947754: state=Stopped err=<nil>
	W1027 23:25:19.450317 1365166 fix.go:139] unexpected machine state, will restart: <nil>
	I1027 23:25:19.035395 1362600 out.go:252]   - Generating certificates and keys ...
	I1027 23:25:19.035494 1362600 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 23:25:19.035562 1362600 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 23:25:19.577879 1362600 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 23:25:20.130073 1362600 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 23:25:20.355446 1362600 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 23:25:21.085475 1362600 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 23:25:21.119415 1362600 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 23:25:21.119762 1362600 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-790322 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 23:25:21.408519 1362600 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 23:25:21.408860 1362600 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-790322 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 23:25:21.881346 1362600 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 23:25:22.842139 1362600 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 23:25:19.453601 1365166 out.go:252] * Restarting existing docker container for "no-preload-947754" ...
	I1027 23:25:19.453682 1365166 cli_runner.go:164] Run: docker start no-preload-947754
	I1027 23:25:19.810128 1365166 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:25:19.847363 1365166 kic.go:430] container "no-preload-947754" state is running.
	I1027 23:25:19.847738 1365166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-947754
	I1027 23:25:19.880616 1365166 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/config.json ...
	I1027 23:25:19.880862 1365166 machine.go:94] provisionDockerMachine start ...
	I1027 23:25:19.880931 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:19.921553 1365166 main.go:143] libmachine: Using SSH client type: native
	I1027 23:25:19.921870 1365166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34579 <nil> <nil>}
	I1027 23:25:19.921879 1365166 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:25:19.922520 1365166 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38170->127.0.0.1:34579: read: connection reset by peer
	I1027 23:25:23.082926 1365166 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-947754
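	(The repeated docker-inspect template in the lines above is how the SSH endpoint for the kic container is derived: the published host port for container port 22/tcp is read out of the container's network settings. A standalone sketch, with the container name taken from this run:
	  docker container inspect \
	    -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	    no-preload-947754
	  # prints the host port libmachine then dials; 34579 in this run)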
	
	I1027 23:25:23.083009 1365166 ubuntu.go:182] provisioning hostname "no-preload-947754"
	I1027 23:25:23.083099 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:23.106135 1365166 main.go:143] libmachine: Using SSH client type: native
	I1027 23:25:23.106546 1365166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34579 <nil> <nil>}
	I1027 23:25:23.106563 1365166 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-947754 && echo "no-preload-947754" | sudo tee /etc/hostname
	I1027 23:25:23.289478 1365166 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-947754
	
	I1027 23:25:23.289591 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:23.317855 1365166 main.go:143] libmachine: Using SSH client type: native
	I1027 23:25:23.318196 1365166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34579 <nil> <nil>}
	I1027 23:25:23.318215 1365166 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-947754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-947754/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-947754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:25:23.491117 1365166 main.go:143] libmachine: SSH cmd err, output: <nil>: 
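	(The shell fragment above keeps /etc/hosts idempotent: it rewrites an existing 127.0.1.1 entry if one is present, otherwise appends one, so repeated provisioning never stacks duplicate hostname lines. The same pattern extracted, with NODE as a placeholder for the node name:
	  NODE=no-preload-947754
	  if ! grep -q "\s${NODE}\$" /etc/hosts; then
	    if grep -q '^127.0.1.1\s' /etc/hosts; then
	      sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${NODE}/" /etc/hosts
	    else
	      echo "127.0.1.1 ${NODE}" | sudo tee -a /etc/hosts
	    fi
	  fi)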
	I1027 23:25:23.491145 1365166 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:25:23.491164 1365166 ubuntu.go:190] setting up certificates
	I1027 23:25:23.491175 1365166 provision.go:84] configureAuth start
	I1027 23:25:23.491237 1365166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-947754
	I1027 23:25:23.513481 1365166 provision.go:143] copyHostCerts
	I1027 23:25:23.513552 1365166 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:25:23.513566 1365166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:25:23.513643 1365166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:25:23.513760 1365166 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:25:23.513771 1365166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:25:23.513799 1365166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:25:23.513870 1365166 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:25:23.513880 1365166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:25:23.513905 1365166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:25:23.513977 1365166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.no-preload-947754 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-947754]
	I1027 23:25:24.179516 1365166 provision.go:177] copyRemoteCerts
	I1027 23:25:24.179583 1365166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:25:24.179640 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:24.198758 1365166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:25:24.311331 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 23:25:24.332497 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:25:24.353201 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 23:25:24.373751 1365166 provision.go:87] duration metric: took 882.552025ms to configureAuth
	I1027 23:25:24.373828 1365166 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:25:24.374088 1365166 config.go:182] Loaded profile config "no-preload-947754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:25:24.374241 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:24.398966 1365166 main.go:143] libmachine: Using SSH client type: native
	I1027 23:25:24.399274 1365166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34579 <nil> <nil>}
	I1027 23:25:24.399288 1365166 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:25:24.797023 1365166 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
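	(A note on the file just written: /etc/sysconfig/crio.minikube is a plain environment file. The assumption here is that the kicbase cri-o unit loads it via EnvironmentFile= and expands $CRIO_MINIKUBE_OPTIONS on its ExecStart line, which is why a simple restart is enough to pick up the --insecure-registry flag. A hypothetical check of that wiring:
	  systemctl cat crio | grep -E 'EnvironmentFile|CRIO_MINIKUBE_OPTIONS')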
	
	I1027 23:25:24.797067 1365166 machine.go:97] duration metric: took 4.916196293s to provisionDockerMachine
	I1027 23:25:24.797079 1365166 start.go:293] postStartSetup for "no-preload-947754" (driver="docker")
	I1027 23:25:24.797093 1365166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:25:24.797156 1365166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:25:24.797216 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:24.825617 1365166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:25:24.950574 1365166 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:25:24.956484 1365166 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:25:24.956511 1365166 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:25:24.956522 1365166 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:25:24.956584 1365166 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:25:24.956665 1365166 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:25:24.956772 1365166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:25:24.968815 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:25:24.995815 1365166 start.go:296] duration metric: took 198.717404ms for postStartSetup
	I1027 23:25:24.995935 1365166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:25:24.996007 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:25.037367 1365166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:25:25.148325 1365166 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:25:25.154177 1365166 fix.go:57] duration metric: took 5.732334657s for fixHost
	I1027 23:25:25.154204 1365166 start.go:83] releasing machines lock for "no-preload-947754", held for 5.732388557s
	I1027 23:25:25.154296 1365166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-947754
	I1027 23:25:25.185834 1365166 ssh_runner.go:195] Run: cat /version.json
	I1027 23:25:25.185897 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:25.186211 1365166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:25:25.186276 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:25.222930 1365166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:25:25.234570 1365166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:25:25.424887 1365166 ssh_runner.go:195] Run: systemctl --version
	I1027 23:25:25.432210 1365166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:25:25.475187 1365166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:25:25.480249 1365166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:25:25.480326 1365166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:25:25.489027 1365166 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 23:25:25.489060 1365166 start.go:496] detecting cgroup driver to use...
	I1027 23:25:25.489090 1365166 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 23:25:25.489161 1365166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:25:25.505803 1365166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:25:25.520829 1365166 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:25:25.520911 1365166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:25:25.537670 1365166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:25:25.552199 1365166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:25:25.717136 1365166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:25:25.877758 1365166 docker.go:234] disabling docker service ...
	I1027 23:25:25.877831 1365166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:25:25.900275 1365166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:25:25.913992 1365166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:25:26.091152 1365166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:25:26.267117 1365166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
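	(The stop/disable/mask sequence above is deliberate: stopping alone would let socket activation bring dockerd or cri-dockerd back, so the sockets are disabled and the services masked before cri-o takes over the node. Condensed into one sketch:
	  sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
	  sudo systemctl disable cri-docker.socket docker.socket
	  sudo systemctl mask cri-docker.service docker.service)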
	I1027 23:25:26.291175 1365166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:25:26.312211 1365166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:25:26.312286 1365166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:25:26.325382 1365166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:25:26.325493 1365166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:25:26.336751 1365166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:25:26.347791 1365166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:25:26.362519 1365166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:25:26.372118 1365166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:25:26.385234 1365166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:25:26.394584 1365166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
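	(Taken together, the sed edits above leave roughly the following keys in /etc/crio/crio.conf.d/02-crio.conf; this is a sketch, and their exact placement under the [crio.runtime]/[crio.image] tables is assumed from the upstream defaults:
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",  (inside default_sysctls = [...]))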
	I1027 23:25:26.405294 1365166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:25:26.415387 1365166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:25:26.425280 1365166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:25:26.610859 1365166 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 23:25:26.844157 1365166 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:25:26.844234 1365166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:25:26.848147 1365166 start.go:564] Will wait 60s for crictl version
	I1027 23:25:26.848223 1365166 ssh_runner.go:195] Run: which crictl
	I1027 23:25:26.855594 1365166 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:25:26.907703 1365166 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 23:25:26.907810 1365166 ssh_runner.go:195] Run: crio --version
	I1027 23:25:26.956111 1365166 ssh_runner.go:195] Run: crio --version
	I1027 23:25:27.007261 1365166 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 23:25:23.930714 1362600 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 23:25:23.930794 1362600 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 23:25:24.414096 1362600 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 23:25:25.842978 1362600 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 23:25:26.257716 1362600 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 23:25:27.150763 1362600 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 23:25:27.673025 1362600 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 23:25:27.674165 1362600 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 23:25:27.684523 1362600 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 23:25:27.688656 1362600 out.go:252]   - Booting up control plane ...
	I1027 23:25:27.688765 1362600 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 23:25:27.693058 1362600 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 23:25:27.693142 1362600 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 23:25:27.715487 1362600 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 23:25:27.715822 1362600 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 23:25:27.728206 1362600 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 23:25:27.728539 1362600 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 23:25:27.728767 1362600 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 23:25:27.931241 1362600 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 23:25:27.931366 1362600 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 23:25:27.010365 1365166 cli_runner.go:164] Run: docker network inspect no-preload-947754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:25:27.036257 1365166 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 23:25:27.041576 1365166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:25:27.054237 1365166 kubeadm.go:884] updating cluster {Name:no-preload-947754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-947754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:25:27.054355 1365166 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:25:27.054519 1365166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:25:27.100009 1365166 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:25:27.100032 1365166 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:25:27.100040 1365166 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 23:25:27.100148 1365166 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-947754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-947754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
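	(The bare ExecStart= line in the drop-in above is the standard systemd idiom for overriding a unit's command: an empty assignment clears the packaged ExecStart before the next line sets the minikube-specific one. The effective command can be confirmed on the node with:
	  systemctl cat kubelet            # unit plus the 10-kubeadm.conf drop-in
	  systemctl show -p ExecStart kubelet)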
	I1027 23:25:27.100231 1365166 ssh_runner.go:195] Run: crio config
	I1027 23:25:27.175054 1365166 cni.go:84] Creating CNI manager for ""
	I1027 23:25:27.175123 1365166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:25:27.175174 1365166 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 23:25:27.175223 1365166 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-947754 NodeName:no-preload-947754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:25:27.175389 1365166 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-947754"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
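	(The rendered config above is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a hedged aside, not something this run does: recent kubeadm releases can sanity-check such a file before it is used, e.g.
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new)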
	
	I1027 23:25:27.175481 1365166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:25:27.183800 1365166 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:25:27.183913 1365166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:25:27.191845 1365166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 23:25:27.225805 1365166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:25:27.244890 1365166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1027 23:25:27.262203 1365166 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:25:27.266931 1365166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:25:27.282733 1365166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:25:27.429222 1365166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:25:27.464538 1365166 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754 for IP: 192.168.76.2
	I1027 23:25:27.464561 1365166 certs.go:195] generating shared ca certs ...
	I1027 23:25:27.464586 1365166 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:25:27.464772 1365166 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:25:27.464838 1365166 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:25:27.464852 1365166 certs.go:257] generating profile certs ...
	I1027 23:25:27.464981 1365166 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/client.key
	I1027 23:25:27.465066 1365166 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.key.2667a321
	I1027 23:25:27.465119 1365166 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.key
	I1027 23:25:27.465256 1365166 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:25:27.465308 1365166 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:25:27.465322 1365166 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:25:27.465367 1365166 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:25:27.465399 1365166 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:25:27.465450 1365166 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:25:27.465522 1365166 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:25:27.466362 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:25:27.499514 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:25:27.552644 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:25:27.590207 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:25:27.623785 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 23:25:27.671572 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 23:25:27.724545 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:25:27.782945 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/no-preload-947754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 23:25:27.835374 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:25:27.859346 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:25:27.887182 1365166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:25:27.938630 1365166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:25:27.955705 1365166 ssh_runner.go:195] Run: openssl version
	I1027 23:25:27.964465 1365166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:25:27.974931 1365166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:25:27.979930 1365166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:25:27.980004 1365166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:25:28.036468 1365166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
	I1027 23:25:28.045021 1365166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:25:28.054117 1365166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:25:28.059004 1365166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:25:28.059080 1365166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:25:28.101354 1365166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:25:28.109981 1365166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:25:28.118736 1365166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:25:28.124086 1365166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:25:28.124168 1365166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:25:28.180415 1365166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
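	The hash-then-symlink sequence above is OpenSSL's trust-directory convention: verification looks up /etc/ssl/certs/<subject-hash>.0 instead of scanning every file, so each CA is hashed and linked under that name. A minimal sketch of the same wiring, using the minikubeCA path from the log (the hash is whatever openssl prints; b5213941 on this run):
	
		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	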
	I1027 23:25:28.192284 1365166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:25:28.196528 1365166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 23:25:28.280376 1365166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 23:25:28.354946 1365166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 23:25:28.443567 1365166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 23:25:28.515350 1365166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 23:25:28.573565 1365166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
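	The -checkend 86400 probes above ask whether each control-plane certificate stays valid for at least the next 86400 seconds (24 hours): openssl exits 0 if so, 1 if the certificate would expire inside the window. A one-line illustration of that exit-code contract, with a cert path taken from the log:
	
		openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
			&& echo "valid for >= 24h" || echo "expires within 24h"
	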
	I1027 23:25:28.665296 1365166 kubeadm.go:401] StartCluster: {Name:no-preload-947754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-947754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:25:28.665404 1365166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:25:28.665521 1365166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:25:28.736721 1365166 cri.go:89] found id: "9f23df14f2981858d26fa46d7024756723417501e064c150efed848207a12d0c"
	I1027 23:25:28.736749 1365166 cri.go:89] found id: "8d31e22ed9a43d906de78edcbe062d2a70163bf79ab57e9dd6ef2531387faeea"
	I1027 23:25:28.736756 1365166 cri.go:89] found id: "cf6586816133757006922d7552cfb82bf56a3f786053d6ff45e949dbf3a4d391"
	I1027 23:25:28.736761 1365166 cri.go:89] found id: "753952329c8042b52b9f0e7089396f8c95422ec863eda044f175ca5860a37dda"
	I1027 23:25:28.736767 1365166 cri.go:89] found id: ""
	I1027 23:25:28.736853 1365166 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 23:25:28.764054 1365166 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:25:28Z" level=error msg="open /run/runc: no such file or directory"
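	The warning above is benign here: minikube queries runc for paused containers so it can unpause them on restart, but /run/runc (runc's default state directory) only exists once runc has created container state on this crio node, so the probe fails and the unpause step is skipped. The failing probe as the log ran it:
	
		sudo runc list -f json    # exits 1 with "open /run/runc: no such file or directory" when no runc state exists
	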
	I1027 23:25:28.764149 1365166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:25:28.787070 1365166 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 23:25:28.787094 1365166 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 23:25:28.787164 1365166 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 23:25:28.823885 1365166 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 23:25:28.824390 1365166 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-947754" does not appear in /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:25:28.824533 1365166 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-1132878/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-947754" cluster setting kubeconfig missing "no-preload-947754" context setting]
	I1027 23:25:28.824856 1365166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:25:28.826537 1365166 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 23:25:28.868417 1365166 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1027 23:25:28.868462 1365166 kubeadm.go:602] duration metric: took 81.361246ms to restartPrimaryControlPlane
	I1027 23:25:28.868472 1365166 kubeadm.go:403] duration metric: took 203.187948ms to StartCluster
	I1027 23:25:28.868487 1365166 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:25:28.868556 1365166 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:25:28.869240 1365166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:25:28.869474 1365166 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:25:28.869887 1365166 config.go:182] Loaded profile config "no-preload-947754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:25:28.869870 1365166 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:25:28.870012 1365166 addons.go:69] Setting storage-provisioner=true in profile "no-preload-947754"
	I1027 23:25:28.870030 1365166 addons.go:238] Setting addon storage-provisioner=true in "no-preload-947754"
	W1027 23:25:28.870037 1365166 addons.go:247] addon storage-provisioner should already be in state true
	I1027 23:25:28.870060 1365166 host.go:66] Checking if "no-preload-947754" exists ...
	I1027 23:25:28.870532 1365166 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:25:28.870731 1365166 addons.go:69] Setting dashboard=true in profile "no-preload-947754"
	I1027 23:25:28.870773 1365166 addons.go:238] Setting addon dashboard=true in "no-preload-947754"
	W1027 23:25:28.870799 1365166 addons.go:247] addon dashboard should already be in state true
	I1027 23:25:28.870836 1365166 host.go:66] Checking if "no-preload-947754" exists ...
	I1027 23:25:28.871099 1365166 addons.go:69] Setting default-storageclass=true in profile "no-preload-947754"
	I1027 23:25:28.871126 1365166 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-947754"
	I1027 23:25:28.871335 1365166 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:25:28.871440 1365166 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:25:28.875082 1365166 out.go:179] * Verifying Kubernetes components...
	I1027 23:25:28.878279 1365166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:25:28.930202 1365166 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 23:25:28.934205 1365166 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 23:25:28.937159 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 23:25:28.937184 1365166 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 23:25:28.937166 1365166 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:25:28.937252 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:28.941601 1365166 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:25:28.941626 1365166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:25:28.941691 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:28.943937 1365166 addons.go:238] Setting addon default-storageclass=true in "no-preload-947754"
	W1027 23:25:28.943957 1365166 addons.go:247] addon default-storageclass should already be in state true
	I1027 23:25:28.943982 1365166 host.go:66] Checking if "no-preload-947754" exists ...
	I1027 23:25:28.944403 1365166 cli_runner.go:164] Run: docker container inspect no-preload-947754 --format={{.State.Status}}
	I1027 23:25:28.995021 1365166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:25:28.997454 1365166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
	I1027 23:25:29.004154 1365166 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:25:29.004180 1365166 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:25:29.004252 1365166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-947754
	I1027 23:25:29.035994 1365166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34579 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/no-preload-947754/id_rsa Username:docker}
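	The Port:34579 in the ssh client entries above comes from the docker inspect template run a few lines earlier, which resolves the host port that Docker mapped to the container's SSH port 22. Reproducing it by hand (container name from the log):
	
		docker container inspect \
			-f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
			no-preload-947754    # prints 34579 on this run
	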
	I1027 23:25:30.430727 1362600 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.500895657s
	I1027 23:25:30.432800 1362600 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 23:25:30.433154 1362600 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1027 23:25:30.433473 1362600 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 23:25:30.433771 1362600 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 23:25:29.384617 1365166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:25:29.440000 1365166 node_ready.go:35] waiting up to 6m0s for node "no-preload-947754" to be "Ready" ...
	I1027 23:25:29.451873 1365166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:25:29.512350 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 23:25:29.512379 1365166 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 23:25:29.572037 1365166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:25:29.621197 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 23:25:29.621224 1365166 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 23:25:29.714394 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 23:25:29.714418 1365166 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 23:25:29.816465 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 23:25:29.816493 1365166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 23:25:29.894572 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 23:25:29.894599 1365166 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 23:25:29.966665 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 23:25:29.966692 1365166 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 23:25:30.041450 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 23:25:30.041491 1365166 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 23:25:30.085222 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 23:25:30.085253 1365166 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 23:25:30.121719 1365166 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 23:25:30.121749 1365166 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 23:25:30.153298 1365166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 23:25:36.933475 1362600 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.499427779s
	I1027 23:25:36.210900 1365166 node_ready.go:49] node "no-preload-947754" is "Ready"
	I1027 23:25:36.210975 1365166 node_ready.go:38] duration metric: took 6.770941657s for node "no-preload-947754" to be "Ready" ...
	I1027 23:25:36.211004 1365166 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:25:36.211099 1365166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:25:39.711181 1365166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.259272583s)
	I1027 23:25:39.711284 1365166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.139220846s)
	I1027 23:25:39.711403 1365166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.558072475s)
	I1027 23:25:39.711442 1365166 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.500308546s)
	I1027 23:25:39.711712 1365166 api_server.go:72] duration metric: took 10.842208237s to wait for apiserver process to appear ...
	I1027 23:25:39.711740 1365166 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:25:39.711783 1365166 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 23:25:39.714565 1365166 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-947754 addons enable metrics-server
	
	I1027 23:25:39.734223 1365166 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 23:25:39.735470 1365166 api_server.go:141] control plane version: v1.34.1
	I1027 23:25:39.735540 1365166 api_server.go:131] duration metric: took 23.777271ms to wait for apiserver health ...
	I1027 23:25:39.735564 1365166 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:25:39.753119 1365166 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 23:25:39.888980 1362600 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 9.45479343s
	I1027 23:25:41.939270 1362600 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.505678373s
	I1027 23:25:41.965830 1362600 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 23:25:41.994710 1362600 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 23:25:42.041614 1362600 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 23:25:42.042154 1362600 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-790322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 23:25:42.072977 1362600 kubeadm.go:319] [bootstrap-token] Using token: 2pihna.mdcf9qb8cpwz02aw
	I1027 23:25:42.077949 1362600 out.go:252]   - Configuring RBAC rules ...
	I1027 23:25:42.078110 1362600 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 23:25:42.105071 1362600 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 23:25:42.137510 1362600 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 23:25:42.172857 1362600 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 23:25:42.191241 1362600 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 23:25:42.205040 1362600 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 23:25:42.360619 1362600 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 23:25:42.799152 1362600 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 23:25:43.352499 1362600 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 23:25:43.354205 1362600 kubeadm.go:319] 
	I1027 23:25:43.354284 1362600 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 23:25:43.354297 1362600 kubeadm.go:319] 
	I1027 23:25:43.354423 1362600 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 23:25:43.354431 1362600 kubeadm.go:319] 
	I1027 23:25:43.354457 1362600 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 23:25:43.357714 1362600 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 23:25:43.357782 1362600 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 23:25:43.357787 1362600 kubeadm.go:319] 
	I1027 23:25:43.357844 1362600 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 23:25:43.357849 1362600 kubeadm.go:319] 
	I1027 23:25:43.357919 1362600 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 23:25:43.357933 1362600 kubeadm.go:319] 
	I1027 23:25:43.357989 1362600 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 23:25:43.358067 1362600 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 23:25:43.358138 1362600 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 23:25:43.358142 1362600 kubeadm.go:319] 
	I1027 23:25:43.358519 1362600 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 23:25:43.358615 1362600 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 23:25:43.358621 1362600 kubeadm.go:319] 
	I1027 23:25:43.358941 1362600 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2pihna.mdcf9qb8cpwz02aw \
	I1027 23:25:43.359055 1362600 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 \
	I1027 23:25:43.359270 1362600 kubeadm.go:319] 	--control-plane 
	I1027 23:25:43.359280 1362600 kubeadm.go:319] 
	I1027 23:25:43.359567 1362600 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 23:25:43.359577 1362600 kubeadm.go:319] 
	I1027 23:25:43.359871 1362600 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2pihna.mdcf9qb8cpwz02aw \
	I1027 23:25:43.360163 1362600 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 
	I1027 23:25:43.374364 1362600 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 23:25:43.374839 1362600 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 23:25:43.374971 1362600 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 23:25:43.374982 1362600 cni.go:84] Creating CNI manager for ""
	I1027 23:25:43.374990 1362600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:25:43.378597 1362600 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 23:25:39.754229 1365166 system_pods.go:59] 8 kube-system pods found
	I1027 23:25:39.754271 1365166 system_pods.go:61] "coredns-66bc5c9577-mzm5d" [7af0a1a1-b33d-4152-ac15-91c2455b2d4c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:25:39.754278 1365166 system_pods.go:61] "etcd-no-preload-947754" [2be2c2d6-87dd-46e1-bc61-0b07f2a00a01] Running
	I1027 23:25:39.754284 1365166 system_pods.go:61] "kindnet-m7l4b" [baea7a6f-5608-4c48-bd36-abcd541e2d3b] Running
	I1027 23:25:39.754291 1365166 system_pods.go:61] "kube-apiserver-no-preload-947754" [19186a0e-373f-47f0-8e69-26a83b51bf39] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:25:39.754301 1365166 system_pods.go:61] "kube-controller-manager-no-preload-947754" [57f740fa-db37-4cbe-a187-a442c308ecc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:25:39.754312 1365166 system_pods.go:61] "kube-proxy-29878" [affca46b-bf6e-4821-a5e4-d7082cacdc04] Running
	I1027 23:25:39.754320 1365166 system_pods.go:61] "kube-scheduler-no-preload-947754" [62236697-12d4-40a2-b609-4cec58ee0277] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:25:39.754325 1365166 system_pods.go:61] "storage-provisioner" [7d8c57e3-c8ca-4466-9c32-fb227a39b7c5] Running
	I1027 23:25:39.754338 1365166 system_pods.go:74] duration metric: took 18.754865ms to wait for pod list to return data ...
	I1027 23:25:39.754346 1365166 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:25:39.756002 1365166 addons.go:514] duration metric: took 10.88612916s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 23:25:39.760139 1365166 default_sa.go:45] found service account: "default"
	I1027 23:25:39.760219 1365166 default_sa.go:55] duration metric: took 5.841838ms for default service account to be created ...
	I1027 23:25:39.760244 1365166 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:25:39.776714 1365166 system_pods.go:86] 8 kube-system pods found
	I1027 23:25:39.776795 1365166 system_pods.go:89] "coredns-66bc5c9577-mzm5d" [7af0a1a1-b33d-4152-ac15-91c2455b2d4c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:25:39.776817 1365166 system_pods.go:89] "etcd-no-preload-947754" [2be2c2d6-87dd-46e1-bc61-0b07f2a00a01] Running
	I1027 23:25:39.776841 1365166 system_pods.go:89] "kindnet-m7l4b" [baea7a6f-5608-4c48-bd36-abcd541e2d3b] Running
	I1027 23:25:39.776877 1365166 system_pods.go:89] "kube-apiserver-no-preload-947754" [19186a0e-373f-47f0-8e69-26a83b51bf39] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:25:39.776910 1365166 system_pods.go:89] "kube-controller-manager-no-preload-947754" [57f740fa-db37-4cbe-a187-a442c308ecc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:25:39.776930 1365166 system_pods.go:89] "kube-proxy-29878" [affca46b-bf6e-4821-a5e4-d7082cacdc04] Running
	I1027 23:25:39.776951 1365166 system_pods.go:89] "kube-scheduler-no-preload-947754" [62236697-12d4-40a2-b609-4cec58ee0277] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:25:39.776980 1365166 system_pods.go:89] "storage-provisioner" [7d8c57e3-c8ca-4466-9c32-fb227a39b7c5] Running
	I1027 23:25:39.777007 1365166 system_pods.go:126] duration metric: took 16.745122ms to wait for k8s-apps to be running ...
	I1027 23:25:39.777031 1365166 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:25:39.777115 1365166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:25:39.815077 1365166 system_svc.go:56] duration metric: took 38.037566ms WaitForService to wait for kubelet
	I1027 23:25:39.815161 1365166 kubeadm.go:587] duration metric: took 10.945656982s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:25:39.815195 1365166 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:25:39.829259 1365166 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:25:39.829334 1365166 node_conditions.go:123] node cpu capacity is 2
	I1027 23:25:39.829362 1365166 node_conditions.go:105] duration metric: took 14.145857ms to run NodePressure ...
	I1027 23:25:39.829388 1365166 start.go:242] waiting for startup goroutines ...
	I1027 23:25:39.829422 1365166 start.go:247] waiting for cluster config update ...
	I1027 23:25:39.829455 1365166 start.go:256] writing updated cluster config ...
	I1027 23:25:39.829801 1365166 ssh_runner.go:195] Run: rm -f paused
	I1027 23:25:39.848349 1365166 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:25:39.862712 1365166 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mzm5d" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 23:25:41.904658 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	I1027 23:25:43.382540 1362600 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 23:25:43.386833 1362600 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 23:25:43.386852 1362600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 23:25:43.415217 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 23:25:43.764171 1362600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 23:25:43.764268 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:43.764305 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-790322 minikube.k8s.io/updated_at=2025_10_27T23_25_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=embed-certs-790322 minikube.k8s.io/primary=true
	I1027 23:25:43.973241 1362600 ops.go:34] apiserver oom_adj: -16
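	The oom_adj: -16 reading above comes from the legacy /proc interface and means the kernel's OOM killer will strongly prefer to reap other processes before kube-apiserver (lower values are safer; the kernel translates oom_adj into oom_score_adj internally). The probe as the log ran it:
	
		cat /proc/$(pgrep kube-apiserver)/oom_adj    # -16 on this node
	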
	I1027 23:25:43.973357 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:44.473727 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:44.973993 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:45.474445 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:45.974361 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:46.474291 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:46.973678 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:47.474118 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:47.973650 1362600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:25:48.289417 1362600 kubeadm.go:1114] duration metric: took 4.525207547s to wait for elevateKubeSystemPrivileges
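	The burst of identical "get sa default" calls above, spaced roughly 500ms apart, is a readiness poll: the minikube-rbac clusterrolebinding created earlier only takes effect once the default ServiceAccount exists, so minikube retries until it appears. An equivalent loop sketch (binary path shortened from the log's full /var/lib/minikube/binaries form):
	
		until sudo kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
			sleep 0.5
		done
	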
	I1027 23:25:48.289443 1362600 kubeadm.go:403] duration metric: took 29.722300277s to StartCluster
	I1027 23:25:48.289460 1362600 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:25:48.289522 1362600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:25:48.290968 1362600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:25:48.291180 1362600 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:25:48.291315 1362600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 23:25:48.291588 1362600 config.go:182] Loaded profile config "embed-certs-790322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:25:48.291798 1362600 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:25:48.291897 1362600 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-790322"
	I1027 23:25:48.291930 1362600 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-790322"
	I1027 23:25:48.291984 1362600 host.go:66] Checking if "embed-certs-790322" exists ...
	I1027 23:25:48.292593 1362600 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:25:48.292016 1362600 addons.go:69] Setting default-storageclass=true in profile "embed-certs-790322"
	I1027 23:25:48.293016 1362600 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-790322"
	I1027 23:25:48.293402 1362600 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:25:48.296665 1362600 out.go:179] * Verifying Kubernetes components...
	I1027 23:25:48.306599 1362600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:25:48.332706 1362600 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1027 23:25:44.370678 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:25:46.869171 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	I1027 23:25:48.336024 1362600 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:25:48.336048 1362600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:25:48.336110 1362600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:25:48.337508 1362600 addons.go:238] Setting addon default-storageclass=true in "embed-certs-790322"
	I1027 23:25:48.337543 1362600 host.go:66] Checking if "embed-certs-790322" exists ...
	I1027 23:25:48.338003 1362600 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:25:48.378996 1362600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34574 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:25:48.386549 1362600 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:25:48.386572 1362600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:25:48.386639 1362600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:25:48.413428 1362600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34574 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:25:49.059236 1362600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:25:49.059361 1362600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 23:25:49.064120 1362600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:25:49.107300 1362600 node_ready.go:35] waiting up to 6m0s for node "embed-certs-790322" to be "Ready" ...
	I1027 23:25:49.177604 1362600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:25:50.189503 1362600 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.130035001s)
	I1027 23:25:50.189607 1362600 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
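	The sed pipeline above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway from inside the cluster. The stanza it injects, as spelled out in the command itself (192.168.85.1 is this run's host address; fallthrough hands every other name to the rest of the plugin chain):
	
		hosts {
		   192.168.85.1 host.minikube.internal
		   fallthrough
		}
	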
	I1027 23:25:50.524920 1362600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.34720525s)
	I1027 23:25:50.525244 1362600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.461093501s)
	I1027 23:25:50.554486 1362600 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 23:25:50.557377 1362600 addons.go:514] duration metric: took 2.265561543s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 23:25:50.694351 1362600 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-790322" context rescaled to 1 replicas
	W1027 23:25:51.110875 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:25:49.369482 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:25:51.372755 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:25:53.869333 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:25:53.610615 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:25:56.110990 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:25:58.111435 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:25:56.368271 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:25:58.868809 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:26:00.128724 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:02.611227 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:01.368672 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:26:03.868748 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:26:05.110211 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:07.111151 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:06.368257 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:26:08.378787 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	W1027 23:26:09.611257 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:12.110228 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:10.868218 1365166 pod_ready.go:104] pod "coredns-66bc5c9577-mzm5d" is not "Ready", error: <nil>
	I1027 23:26:12.368788 1365166 pod_ready.go:94] pod "coredns-66bc5c9577-mzm5d" is "Ready"
	I1027 23:26:12.368816 1365166 pod_ready.go:86] duration metric: took 32.506033244s for pod "coredns-66bc5c9577-mzm5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:12.371523 1365166 pod_ready.go:83] waiting for pod "etcd-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:12.375993 1365166 pod_ready.go:94] pod "etcd-no-preload-947754" is "Ready"
	I1027 23:26:12.376032 1365166 pod_ready.go:86] duration metric: took 4.479965ms for pod "etcd-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:12.378299 1365166 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:12.383341 1365166 pod_ready.go:94] pod "kube-apiserver-no-preload-947754" is "Ready"
	I1027 23:26:12.383369 1365166 pod_ready.go:86] duration metric: took 5.041031ms for pod "kube-apiserver-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:12.385854 1365166 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:12.566652 1365166 pod_ready.go:94] pod "kube-controller-manager-no-preload-947754" is "Ready"
	I1027 23:26:12.566677 1365166 pod_ready.go:86] duration metric: took 180.759058ms for pod "kube-controller-manager-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:12.766683 1365166 pod_ready.go:83] waiting for pod "kube-proxy-29878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:13.167013 1365166 pod_ready.go:94] pod "kube-proxy-29878" is "Ready"
	I1027 23:26:13.167043 1365166 pod_ready.go:86] duration metric: took 400.333252ms for pod "kube-proxy-29878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:13.367335 1365166 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:13.766796 1365166 pod_ready.go:94] pod "kube-scheduler-no-preload-947754" is "Ready"
	I1027 23:26:13.766830 1365166 pod_ready.go:86] duration metric: took 399.467238ms for pod "kube-scheduler-no-preload-947754" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:26:13.766844 1365166 pod_ready.go:40] duration metric: took 33.918408882s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:26:13.824702 1365166 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:26:13.830035 1365166 out.go:179] * Done! kubectl is now configured to use "no-preload-947754" cluster and "default" namespace by default
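	The "minor skew: 1" note above is informational rather than an error: kubectl is supported within one minor version of the API server in either direction, so a 1.33 client against this 1.34.1 control plane is inside policy. To compare the two versions directly:
	
		kubectl version -o json    # compare clientVersion vs serverVersion
	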
	W1027 23:26:14.111076 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:16.611293 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:19.110066 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:21.110786 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:23.110935 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:25.111080 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	W1027 23:26:27.610982 1362600 node_ready.go:57] node "embed-certs-790322" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 27 23:26:14 no-preload-947754 crio[650]: time="2025-10-27T23:26:14.818351148Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:26:14 no-preload-947754 crio[650]: time="2025-10-27T23:26:14.834491339Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:26:14 no-preload-947754 crio[650]: time="2025-10-27T23:26:14.835423487Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:26:14 no-preload-947754 crio[650]: time="2025-10-27T23:26:14.851407597Z" level=info msg="Created container 95d9328dd9ac768fcd96be887568f43b7a718761d9ae83cb1ca842b6af910fce: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx/dashboard-metrics-scraper" id=f1e0d07c-24c5-45e3-a883-c8cfccd364b9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:26:14 no-preload-947754 crio[650]: time="2025-10-27T23:26:14.852669605Z" level=info msg="Starting container: 95d9328dd9ac768fcd96be887568f43b7a718761d9ae83cb1ca842b6af910fce" id=c18e9eb4-c34a-4798-b433-3d2a70b6dd52 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:26:14 no-preload-947754 crio[650]: time="2025-10-27T23:26:14.856174105Z" level=info msg="Started container" PID=1658 containerID=95d9328dd9ac768fcd96be887568f43b7a718761d9ae83cb1ca842b6af910fce description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx/dashboard-metrics-scraper id=c18e9eb4-c34a-4798-b433-3d2a70b6dd52 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9f3a0f88ad441f594e64513a1139d5bf7a5bc886062ee1e5b678d9833abfa4f9
	Oct 27 23:26:14 no-preload-947754 conmon[1656]: conmon 95d9328dd9ac768fcd96 <ninfo>: container 1658 exited with status 1
	Oct 27 23:26:15 no-preload-947754 crio[650]: time="2025-10-27T23:26:15.145871575Z" level=info msg="Removing container: c494bf0e9a0ae4235582055e5637aefda392e725d01389766fd626081efd7084" id=76bd515a-6284-4bee-9f58-9eb92422bb4e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 23:26:15 no-preload-947754 crio[650]: time="2025-10-27T23:26:15.157143414Z" level=info msg="Error loading conmon cgroup of container c494bf0e9a0ae4235582055e5637aefda392e725d01389766fd626081efd7084: cgroup deleted" id=76bd515a-6284-4bee-9f58-9eb92422bb4e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 23:26:15 no-preload-947754 crio[650]: time="2025-10-27T23:26:15.161688266Z" level=info msg="Removed container c494bf0e9a0ae4235582055e5637aefda392e725d01389766fd626081efd7084: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx/dashboard-metrics-scraper" id=76bd515a-6284-4bee-9f58-9eb92422bb4e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.452090951Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.458883891Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.458924909Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.458951429Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.462085963Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.462121123Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.462155445Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.467104028Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.467141296Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.467168307Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.478053778Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.478091842Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.478117524Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.482062244Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:26:18 no-preload-947754 crio[650]: time="2025-10-27T23:26:18.482104928Z" level=info msg="Updated default CNI network name to kindnet"
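	The CREATE .temp / WRITE / RENAME event sequence above is kindnet writing its CNI config atomically: the file is assembled under a temporary name and then renamed into place, so crio's config watcher never observes a half-written file. The same pattern in shell ($CONFIG_JSON is a hypothetical variable standing in for the generated config):
	
		tmp=/etc/cni/net.d/10-kindnet.conflist.temp
		printf '%s' "$CONFIG_JSON" > "$tmp"
		mv "$tmp" /etc/cni/net.d/10-kindnet.conflist    # rename(2) is atomic within a filesystem
	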
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	95d9328dd9ac7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago       Exited              dashboard-metrics-scraper   2                   9f3a0f88ad441       dashboard-metrics-scraper-6ffb444bf9-ls2dx   kubernetes-dashboard
	a9afcfa94ebd1       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           22 seconds ago       Running             storage-provisioner         2                   5cbea8e666633       storage-provisioner                          kube-system
	d820306abf607       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   6169a4d9afc1b       kubernetes-dashboard-855c9754f9-zxvvw        kubernetes-dashboard
	dce502a098734       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   4c3268cc79490       coredns-66bc5c9577-mzm5d                     kube-system
	1d5289ac78c72       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   6f61bfdd93ec6       busybox                                      default
	411070ec7a49e       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           53 seconds ago       Exited              storage-provisioner         1                   5cbea8e666633       storage-provisioner                          kube-system
	72419b65a3b57       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   4a9045afbc941       kindnet-m7l4b                                kube-system
	f06617fb88cc0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   154a97e76c812       kube-proxy-29878                             kube-system
	9f23df14f2981       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   f0634a0001467       kube-apiserver-no-preload-947754             kube-system
	8d31e22ed9a43       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   476d6455852bf       kube-controller-manager-no-preload-947754    kube-system
	cf65868161337       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   285e1a0f4ccd0       etcd-no-preload-947754                       kube-system
	753952329c804       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   51180ffafaf96       kube-scheduler-no-preload-947754             kube-system
	
	
	==> coredns [dce502a0987347d98c1fadd581f5383d9c39aebc92f303d3c2f85a014ca708fd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56369 - 43646 "HINFO IN 5642184654014402772.6034745111912342011. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01432634s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
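
The reflector errors above all point at https://10.96.0.1:443, the in-cluster kubernetes Service VIP, so the i/o timeouts indicate pod-to-apiserver networking was broken rather than coredns itself failing. A minimal client-go sketch that reproduces the same namespace list (assuming it runs in a pod with the default in-cluster service account; this is illustrative and not part of the report's tooling):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// InClusterConfig resolves the apiserver from the service
		// env vars, i.e. the 10.96.0.1:443 VIP seen in the log above.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// The same List call the coredns reflector issues; an i/o
		// timeout here confirms broken service networking.
		ns, err := cs.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{Limit: 500})
		if err != nil {
			log.Fatalf("list namespaces: %v", err)
		}
		fmt.Printf("apiserver reachable; %d namespaces\n", len(ns.Items))
	}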
	
	
	==> describe nodes <==
	Name:               no-preload-947754
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-947754
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=no-preload-947754
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T23_24_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 23:24:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-947754
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 23:26:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 23:26:06 +0000   Mon, 27 Oct 2025 23:24:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 23:26:06 +0000   Mon, 27 Oct 2025 23:24:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 23:26:06 +0000   Mon, 27 Oct 2025 23:24:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 23:26:06 +0000   Mon, 27 Oct 2025 23:24:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-947754
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                c8ec03af-833c-45dd-b53c-bcc66992da89
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-mzm5d                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     114s
	  kube-system                 etcd-no-preload-947754                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-m7l4b                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-no-preload-947754              250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-no-preload-947754     200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-29878                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-no-preload-947754              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ls2dx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zxvvw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 113s                   kube-proxy       
	  Normal   Starting                 50s                    kube-proxy       
	  Normal   Starting                 2m11s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m11s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m11s (x7 over 2m11s)  kubelet          Node no-preload-947754 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m11s (x7 over 2m11s)  kubelet          Node no-preload-947754 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m11s (x7 over 2m11s)  kubelet          Node no-preload-947754 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m                     kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m                     kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    119s                   kubelet          Node no-preload-947754 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     119s                   kubelet          Node no-preload-947754 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  119s                   kubelet          Node no-preload-947754 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           116s                   node-controller  Node no-preload-947754 event: Registered Node no-preload-947754 in Controller
	  Normal   NodeReady                100s                   kubelet          Node no-preload-947754 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node no-preload-947754 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node no-preload-947754 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node no-preload-947754 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node no-preload-947754 event: Registered Node no-preload-947754 in Controller
	
	
	==> dmesg <==
	[Oct27 23:01] overlayfs: idmapped layers are currently not supported
	[ +42.515610] overlayfs: idmapped layers are currently not supported
	[Oct27 23:02] overlayfs: idmapped layers are currently not supported
	[Oct27 23:03] overlayfs: idmapped layers are currently not supported
	[Oct27 23:04] overlayfs: idmapped layers are currently not supported
	[Oct27 23:06] overlayfs: idmapped layers are currently not supported
	[  +3.129054] overlayfs: idmapped layers are currently not supported
	[Oct27 23:08] overlayfs: idmapped layers are currently not supported
	[Oct27 23:09] overlayfs: idmapped layers are currently not supported
	[  +0.696324] overlayfs: idmapped layers are currently not supported
	[ +42.065460] overlayfs: idmapped layers are currently not supported
	[Oct27 23:10] overlayfs: idmapped layers are currently not supported
	[ +23.722860] overlayfs: idmapped layers are currently not supported
	[Oct27 23:16] overlayfs: idmapped layers are currently not supported
	[Oct27 23:17] overlayfs: idmapped layers are currently not supported
	[Oct27 23:18] overlayfs: idmapped layers are currently not supported
	[Oct27 23:19] overlayfs: idmapped layers are currently not supported
	[Oct27 23:20] overlayfs: idmapped layers are currently not supported
	[Oct27 23:21] overlayfs: idmapped layers are currently not supported
	[Oct27 23:22] overlayfs: idmapped layers are currently not supported
	[ +34.590925] overlayfs: idmapped layers are currently not supported
	[Oct27 23:23] overlayfs: idmapped layers are currently not supported
	[  +6.906011] overlayfs: idmapped layers are currently not supported
	[Oct27 23:25] overlayfs: idmapped layers are currently not supported
	[  +2.284017] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [cf6586816133757006922d7552cfb82bf56a3f786053d6ff45e949dbf3a4d391] <==
	{"level":"warn","ts":"2025-10-27T23:25:32.716351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:32.837752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:32.884633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:32.932339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:32.980838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.006511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.037601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.072351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.108499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.145759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.192960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.227135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.270432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.312007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.345238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.394663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.448136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.519764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.602507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.684234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.786579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.819244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.874299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:33.962000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:34.182697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43094","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:26:31 up  6:09,  0 user,  load average: 4.59, 4.15, 3.37
	Linux no-preload-947754 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [72419b65a3b57a571d664d92c78cb819499e775deac68bc21b2c1056c29b67bc] <==
	I1027 23:25:38.157982       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:25:38.162663       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 23:25:38.164422       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:25:38.164493       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:25:38.164531       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:25:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:25:38.451373       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:25:38.466002       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:25:38.466069       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:25:38.467006       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 23:26:08.452248       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 23:26:08.466849       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 23:26:08.466876       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 23:26:08.466986       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1027 23:26:10.167241       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 23:26:10.167275       1 metrics.go:72] Registering metrics
	I1027 23:26:10.167347       1 controller.go:711] "Syncing nftables rules"
	I1027 23:26:18.451733       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 23:26:18.451800       1 main.go:301] handling current node
	I1027 23:26:28.451675       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 23:26:28.451737       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9f23df14f2981858d26fa46d7024756723417501e064c150efed848207a12d0c] <==
	I1027 23:25:36.381515       1 cache.go:39] Caches are synced for autoregister controller
	I1027 23:25:36.393027       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 23:25:36.393130       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 23:25:36.393168       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 23:25:36.394685       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 23:25:36.394707       1 policy_source.go:240] refreshing policies
	I1027 23:25:36.395648       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 23:25:36.395937       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:25:36.419146       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 23:25:36.419185       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 23:25:36.428968       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 23:25:36.452059       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 23:25:36.483172       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 23:25:36.599183       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 23:25:36.872379       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 23:25:38.572198       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 23:25:38.818146       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 23:25:39.094205       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 23:25:39.170895       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 23:25:39.564954       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.229.20"}
	I1027 23:25:39.603424       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.238.198"}
	W1027 23:25:39.616639       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1027 23:25:39.618246       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 23:25:41.368678       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 23:25:41.676857       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8d31e22ed9a43d906de78edcbe062d2a70163bf79ab57e9dd6ef2531387faeea] <==
	I1027 23:25:41.215313       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 23:25:41.215750       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 23:25:41.220414       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 23:25:41.221643       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 23:25:41.225417       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 23:25:41.227629       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 23:25:41.236634       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 23:25:41.238530       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 23:25:41.247348       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 23:25:41.253737       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:25:41.253832       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 23:25:41.253958       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 23:25:41.254083       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-947754"
	I1027 23:25:41.254161       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 23:25:41.254795       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 23:25:41.254906       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:25:41.254920       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 23:25:41.256194       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 23:25:41.256637       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 23:25:41.261561       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:25:41.262297       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 23:25:41.267491       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 23:25:41.291008       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:25:41.291126       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 23:25:41.291157       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [f06617fb88cc02987c92472c35f87309338616d5e8dbb92304621d4132735bbb] <==
	I1027 23:25:40.194705       1 server_linux.go:53] "Using iptables proxy"
	I1027 23:25:40.554447       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 23:25:40.663782       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 23:25:40.663826       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 23:25:40.663921       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 23:25:41.228538       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:25:41.231706       1 server_linux.go:132] "Using iptables Proxier"
	I1027 23:25:41.268592       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 23:25:41.269232       1 server.go:527] "Version info" version="v1.34.1"
	I1027 23:25:41.270132       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:25:41.286926       1 config.go:200] "Starting service config controller"
	I1027 23:25:41.287548       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 23:25:41.287632       1 config.go:106] "Starting endpoint slice config controller"
	I1027 23:25:41.287679       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 23:25:41.287746       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 23:25:41.287775       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 23:25:41.292912       1 config.go:309] "Starting node config controller"
	I1027 23:25:41.293596       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 23:25:41.293632       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 23:25:41.391533       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 23:25:41.391628       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 23:25:41.391654       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [753952329c8042b52b9f0e7089396f8c95422ec863eda044f175ca5860a37dda] <==
	I1027 23:25:38.470733       1 serving.go:386] Generated self-signed cert in-memory
	I1027 23:25:42.925931       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 23:25:42.925968       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:25:42.935316       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 23:25:42.935417       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 23:25:42.935445       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 23:25:42.935488       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 23:25:42.974031       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:25:42.974064       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:25:42.974120       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:25:42.974128       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:25:43.036401       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 23:25:43.074473       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:25:43.074537       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 23:25:43 no-preload-947754 kubelet[767]: I1027 23:25:43.168532     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l9jp\" (UniqueName: \"kubernetes.io/projected/fbebe4c7-b069-41ce-a789-cdbad9d17eb5-kube-api-access-6l9jp\") pod \"dashboard-metrics-scraper-6ffb444bf9-ls2dx\" (UID: \"fbebe4c7-b069-41ce-a789-cdbad9d17eb5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx"
	Oct 27 23:25:43 no-preload-947754 kubelet[767]: I1027 23:25:43.168594     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96tj9\" (UniqueName: \"kubernetes.io/projected/4bbaec9e-8f8f-4fa3-a0c2-09c0878f6f31-kube-api-access-96tj9\") pod \"kubernetes-dashboard-855c9754f9-zxvvw\" (UID: \"4bbaec9e-8f8f-4fa3-a0c2-09c0878f6f31\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zxvvw"
	Oct 27 23:25:43 no-preload-947754 kubelet[767]: I1027 23:25:43.168623     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fbebe4c7-b069-41ce-a789-cdbad9d17eb5-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-ls2dx\" (UID: \"fbebe4c7-b069-41ce-a789-cdbad9d17eb5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx"
	Oct 27 23:25:43 no-preload-947754 kubelet[767]: I1027 23:25:43.168675     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4bbaec9e-8f8f-4fa3-a0c2-09c0878f6f31-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-zxvvw\" (UID: \"4bbaec9e-8f8f-4fa3-a0c2-09c0878f6f31\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zxvvw"
	Oct 27 23:25:43 no-preload-947754 kubelet[767]: W1027 23:25:43.423401     767 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd/crio-6169a4d9afc1b400f1f202e0441af71dc32d112e082ce9b2fefc2bf232e6098a WatchSource:0}: Error finding container 6169a4d9afc1b400f1f202e0441af71dc32d112e082ce9b2fefc2bf232e6098a: Status 404 returned error can't find the container with id 6169a4d9afc1b400f1f202e0441af71dc32d112e082ce9b2fefc2bf232e6098a
	Oct 27 23:25:43 no-preload-947754 kubelet[767]: W1027 23:25:43.454551     767 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c73891b58ca0c1e3771a12326dc198fce283cad5a3a64ea4e206ff4e2ad2bdcd/crio-9f3a0f88ad441f594e64513a1139d5bf7a5bc886062ee1e5b678d9833abfa4f9 WatchSource:0}: Error finding container 9f3a0f88ad441f594e64513a1139d5bf7a5bc886062ee1e5b678d9833abfa4f9: Status 404 returned error can't find the container with id 9f3a0f88ad441f594e64513a1139d5bf7a5bc886062ee1e5b678d9833abfa4f9
	Oct 27 23:25:51 no-preload-947754 kubelet[767]: I1027 23:25:51.083604     767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zxvvw" podStartSLOduration=3.193574978 podStartE2EDuration="10.083587142s" podCreationTimestamp="2025-10-27 23:25:41 +0000 UTC" firstStartedPulling="2025-10-27 23:25:43.4294614 +0000 UTC m=+15.974640224" lastFinishedPulling="2025-10-27 23:25:50.319473564 +0000 UTC m=+22.864652388" observedRunningTime="2025-10-27 23:25:51.081133479 +0000 UTC m=+23.626312303" watchObservedRunningTime="2025-10-27 23:25:51.083587142 +0000 UTC m=+23.628765974"
	Oct 27 23:25:56 no-preload-947754 kubelet[767]: I1027 23:25:56.082366     767 scope.go:117] "RemoveContainer" containerID="779e4d613c1da5c39da3dd9d90eb8a837ca3e84a99b61a1c7c08228a6c454e0d"
	Oct 27 23:25:57 no-preload-947754 kubelet[767]: I1027 23:25:57.086144     767 scope.go:117] "RemoveContainer" containerID="779e4d613c1da5c39da3dd9d90eb8a837ca3e84a99b61a1c7c08228a6c454e0d"
	Oct 27 23:25:57 no-preload-947754 kubelet[767]: I1027 23:25:57.086744     767 scope.go:117] "RemoveContainer" containerID="c494bf0e9a0ae4235582055e5637aefda392e725d01389766fd626081efd7084"
	Oct 27 23:25:57 no-preload-947754 kubelet[767]: E1027 23:25:57.086922     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2dx_kubernetes-dashboard(fbebe4c7-b069-41ce-a789-cdbad9d17eb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx" podUID="fbebe4c7-b069-41ce-a789-cdbad9d17eb5"
	Oct 27 23:25:58 no-preload-947754 kubelet[767]: I1027 23:25:58.090173     767 scope.go:117] "RemoveContainer" containerID="c494bf0e9a0ae4235582055e5637aefda392e725d01389766fd626081efd7084"
	Oct 27 23:25:58 no-preload-947754 kubelet[767]: E1027 23:25:58.090336     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2dx_kubernetes-dashboard(fbebe4c7-b069-41ce-a789-cdbad9d17eb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx" podUID="fbebe4c7-b069-41ce-a789-cdbad9d17eb5"
	Oct 27 23:26:03 no-preload-947754 kubelet[767]: I1027 23:26:03.371082     767 scope.go:117] "RemoveContainer" containerID="c494bf0e9a0ae4235582055e5637aefda392e725d01389766fd626081efd7084"
	Oct 27 23:26:03 no-preload-947754 kubelet[767]: E1027 23:26:03.371276     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2dx_kubernetes-dashboard(fbebe4c7-b069-41ce-a789-cdbad9d17eb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx" podUID="fbebe4c7-b069-41ce-a789-cdbad9d17eb5"
	Oct 27 23:26:09 no-preload-947754 kubelet[767]: I1027 23:26:09.124574     767 scope.go:117] "RemoveContainer" containerID="411070ec7a49e4f7f558d049d91a93e52b7f68d46532edcf9784b3a28da65fe6"
	Oct 27 23:26:14 no-preload-947754 kubelet[767]: I1027 23:26:14.814967     767 scope.go:117] "RemoveContainer" containerID="c494bf0e9a0ae4235582055e5637aefda392e725d01389766fd626081efd7084"
	Oct 27 23:26:15 no-preload-947754 kubelet[767]: I1027 23:26:15.143518     767 scope.go:117] "RemoveContainer" containerID="c494bf0e9a0ae4235582055e5637aefda392e725d01389766fd626081efd7084"
	Oct 27 23:26:15 no-preload-947754 kubelet[767]: I1027 23:26:15.143815     767 scope.go:117] "RemoveContainer" containerID="95d9328dd9ac768fcd96be887568f43b7a718761d9ae83cb1ca842b6af910fce"
	Oct 27 23:26:15 no-preload-947754 kubelet[767]: E1027 23:26:15.143986     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2dx_kubernetes-dashboard(fbebe4c7-b069-41ce-a789-cdbad9d17eb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx" podUID="fbebe4c7-b069-41ce-a789-cdbad9d17eb5"
	Oct 27 23:26:23 no-preload-947754 kubelet[767]: I1027 23:26:23.370436     767 scope.go:117] "RemoveContainer" containerID="95d9328dd9ac768fcd96be887568f43b7a718761d9ae83cb1ca842b6af910fce"
	Oct 27 23:26:23 no-preload-947754 kubelet[767]: E1027 23:26:23.370623     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2dx_kubernetes-dashboard(fbebe4c7-b069-41ce-a789-cdbad9d17eb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2dx" podUID="fbebe4c7-b069-41ce-a789-cdbad9d17eb5"
	Oct 27 23:26:26 no-preload-947754 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 23:26:26 no-preload-947754 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 23:26:26 no-preload-947754 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
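
The back-off messages above ("back-off 10s", then "back-off 20s") are kubelet's CrashLoopBackOff doubling the restart delay for the failing dashboard-metrics-scraper container. A toy sketch of that schedule, assuming the documented 10s base and 5m cap rather than kubelet's actual source:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// kubelet doubles the crash-loop restart delay per failed
		// start, from a 10s base up to a 5m cap (documented behaviour;
		// constants assumed here, not read from kubelet source).
		delay, maxDelay := 10*time.Second, 5*time.Minute
		for attempt := 1; attempt <= 7; attempt++ {
			fmt.Printf("attempt %d: back-off %s\n", attempt, delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}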
	
	
	==> kubernetes-dashboard [d820306abf607ac55bcab84f8735d57b9b838b6f2dcd5d7b45c692707223d95a] <==
	2025/10/27 23:25:50 Using namespace: kubernetes-dashboard
	2025/10/27 23:25:50 Using in-cluster config to connect to apiserver
	2025/10/27 23:25:50 Using secret token for csrf signing
	2025/10/27 23:25:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 23:25:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 23:25:50 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 23:25:50 Generating JWE encryption key
	2025/10/27 23:25:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 23:25:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 23:25:51 Initializing JWE encryption key from synchronized object
	2025/10/27 23:25:51 Creating in-cluster Sidecar client
	2025/10/27 23:25:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 23:25:51 Serving insecurely on HTTP port: 9090
	2025/10/27 23:26:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 23:25:50 Starting overwatch
	
	
	==> storage-provisioner [411070ec7a49e4f7f558d049d91a93e52b7f68d46532edcf9784b3a28da65fe6] <==
	I1027 23:25:38.797275       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 23:26:08.807592       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a9afcfa94ebd1357f2da7111c52cf9032a26396ad5338a0fbec038de3eb2dfd0] <==
	I1027 23:26:09.177533       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 23:26:09.196723       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 23:26:09.196925       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 23:26:09.200442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:12.656784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:16.917647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:20.516147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:23.570108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:26.592158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:26.597579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:26:26.597741       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 23:26:26.597902       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-947754_74ed6230-2ff8-4940-bc04-93941c6437a3!
	W1027 23:26:26.602936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:26:26.604096       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"77faea05-b4f8-4145-b717-91f936278f59", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-947754_74ed6230-2ff8-4940-bc04-93941c6437a3 became leader
	W1027 23:26:26.622622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:26:26.698539       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-947754_74ed6230-2ff8-4940-bc04-93941c6437a3!
	W1027 23:26:28.626340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:28.633408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:30.636564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:30.648769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-947754 -n no-preload-947754
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-947754 -n no-preload-947754: exit status 2 (379.479154ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
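The `--format={{.APIServer}}` flag used above is a Go text/template rendered over minikube's status struct, which is why the probe prints the single word "Running". A toy reproduction of the mechanism (the Status type here is a hypothetical stand-in for minikube's internal struct, which does expose an APIServer field by that name):

	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical stand-in for minikube's status struct; the real
	// command renders a field named APIServer exactly like this.
	type Status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		t := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = t.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"})
	}
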
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-947754 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.73s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-790322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-790322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (638.046351ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:26:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-790322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-790322 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-790322 describe deploy/metrics-server -n kube-system: exit status 1 (95.464523ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-790322 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
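The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's pre-flight paused check, which per the stderr shells out to `sudo runc list -f json` and dies because /run/runc is absent on the node. A hedged sketch of such a check follows; the JSON field names match runc's list output, but the logic is illustrative, not minikube's actual implementation:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// Subset of runc's JSON list output relevant to a paused check.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// The branch this test hits: runc exits with status 1,
			// "open /run/runc: no such file or directory".
			log.Fatalf("runc list: %v", err)
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		for _, c := range containers {
			if c.Status == "paused" {
				fmt.Println("paused:", c.ID)
			}
		}
	}
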
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-790322
helpers_test.go:243: (dbg) docker inspect embed-certs-790322:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88",
	        "Created": "2025-10-27T23:25:09.592548844Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1363628,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T23:25:09.686738563Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88/hosts",
	        "LogPath": "/var/lib/docker/containers/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88-json.log",
	        "Name": "/embed-certs-790322",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-790322:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-790322",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88",
	                "LowerDir": "/var/lib/docker/overlay2/2ae6e33e0abf8cb5abe216433ff774e2094abeb181f625d12b51874bce8486b6-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2ae6e33e0abf8cb5abe216433ff774e2094abeb181f625d12b51874bce8486b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2ae6e33e0abf8cb5abe216433ff774e2094abeb181f625d12b51874bce8486b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2ae6e33e0abf8cb5abe216433ff774e2094abeb181f625d12b51874bce8486b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-790322",
	                "Source": "/var/lib/docker/volumes/embed-certs-790322/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-790322",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-790322",
	                "name.minikube.sigs.k8s.io": "embed-certs-790322",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84a8cf321f6926a9573405a3d44dedc746684e18f5ea9c227a7d950ca82738d0",
	            "SandboxKey": "/var/run/docker/netns/84a8cf321f69",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34574"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34575"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34578"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34576"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34577"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-790322": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:11:c3:eb:f0:08",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "49c1672ada24cf39a040b77c54572c8441994ff7afeb8ca5778d5d7aaf9fecd8",
	                    "EndpointID": "fc96520ebdfb8e54a4b861c5142320f29979b2ede4b16532e7277a8dbe81f359",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-790322",
	                        "f2a16ed0b5f1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
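The inspect output above shows how the kic container publishes its service ports: each container port (22 for SSH, 2376 for Docker, 8443 for the apiserver, plus 5000 and 32443) is bound to an ephemeral host port on 127.0.0.1. As a minimal sketch of how a client can recover those bindings, the Go program below decodes the NetworkSettings.Ports map from captured "docker inspect" JSON. This is illustrative only, not minikube's actual code; the file name inspect.json and the struct shape are assumptions.

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// containerInfo models only the fields of `docker inspect` output used here.
	type containerInfo struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		// "inspect.json" is an assumed capture of: docker inspect embed-certs-790322
		data, err := os.ReadFile("inspect.json")
		if err != nil {
			panic(err)
		}
		var infos []containerInfo // docker inspect prints a JSON array
		if err := json.Unmarshal(data, &infos); err != nil || len(infos) == 0 {
			panic("no container info decoded")
		}
		// Print each published port, e.g. "22/tcp -> 127.0.0.1:34574".
		for proto, bindings := range infos[0].NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", proto, b.HostIp, b.HostPort)
			}
		}
	}

Run against the JSON above, it would print lines such as "22/tcp -> 127.0.0.1:34574", which is the address the test harness dials for SSH.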
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790322 -n embed-certs-790322
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-790322 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-790322 logs -n 25: (1.542448743s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-440075 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-440075                │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ stop    │ -p old-k8s-version-477179 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-440075                │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-440075                │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ ssh     │ -p bridge-440075 sudo crio config                                                                                                                                                                                                             │ bridge-440075                │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ delete  │ -p bridge-440075                                                                                                                                                                                                                              │ bridge-440075                │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ start   │ -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:24 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-477179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ start   │ -p old-k8s-version-477179 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:24 UTC │
	│ image   │ old-k8s-version-477179 image list --format=json                                                                                                                                                                                               │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │ 27 Oct 25 23:24 UTC │
	│ pause   │ -p old-k8s-version-477179 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │                     │
	│ delete  │ -p old-k8s-version-477179                                                                                                                                                                                                                     │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │ 27 Oct 25 23:25 UTC │
	│ delete  │ -p old-k8s-version-477179                                                                                                                                                                                                                     │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ start   │ -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:26 UTC │
	│ addons  │ enable metrics-server -p no-preload-947754 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │                     │
	│ stop    │ -p no-preload-947754 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ addons  │ enable dashboard -p no-preload-947754 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ start   │ -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:26 UTC │
	│ image   │ no-preload-947754 image list --format=json                                                                                                                                                                                                    │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ pause   │ -p no-preload-947754 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	│ delete  │ -p no-preload-947754                                                                                                                                                                                                                          │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ delete  │ -p no-preload-947754                                                                                                                                                                                                                          │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ delete  │ -p disable-driver-mounts-247293                                                                                                                                                                                                               │ disable-driver-mounts-247293 │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ start   │ -p default-k8s-diff-port-336451 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-790322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 23:26:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 23:26:35.850668 1369496 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:26:35.850816 1369496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:26:35.850844 1369496 out.go:374] Setting ErrFile to fd 2...
	I1027 23:26:35.850865 1369496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:26:35.851130 1369496 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:26:35.851595 1369496 out.go:368] Setting JSON to false
	I1027 23:26:35.852621 1369496 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22145,"bootTime":1761585451,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 23:26:35.852697 1369496 start.go:143] virtualization:  
	I1027 23:26:35.856530 1369496 out.go:179] * [default-k8s-diff-port-336451] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 23:26:35.859903 1369496 notify.go:221] Checking for updates...
	I1027 23:26:35.860946 1369496 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:26:35.864081 1369496 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:26:35.867050 1369496 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:26:35.870077 1369496 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 23:26:35.872843 1369496 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 23:26:35.875934 1369496 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 23:26:35.879430 1369496 config.go:182] Loaded profile config "embed-certs-790322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:26:35.879573 1369496 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:26:35.907073 1369496 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 23:26:35.907193 1369496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:26:35.967992 1369496 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 23:26:35.958569671 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:26:35.968115 1369496 docker.go:318] overlay module found
	I1027 23:26:35.971338 1369496 out.go:179] * Using the docker driver based on user configuration
	I1027 23:26:35.974141 1369496 start.go:307] selected driver: docker
	I1027 23:26:35.974165 1369496 start.go:928] validating driver "docker" against <nil>
	I1027 23:26:35.974192 1369496 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:26:35.975101 1369496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:26:36.059386 1369496 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 23:26:36.044604613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:26:36.059539 1369496 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 23:26:36.059781 1369496 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:26:36.062771 1369496 out.go:179] * Using Docker driver with root privileges
	I1027 23:26:36.066854 1369496 cni.go:84] Creating CNI manager for ""
	I1027 23:26:36.066954 1369496 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:26:36.066973 1369496 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 23:26:36.067067 1369496 start.go:351] cluster config:
	{Name:default-k8s-diff-port-336451 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-336451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:26:36.070361 1369496 out.go:179] * Starting "default-k8s-diff-port-336451" primary control-plane node in "default-k8s-diff-port-336451" cluster
	I1027 23:26:36.073237 1369496 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 23:26:36.076204 1369496 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 23:26:36.079097 1369496 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:26:36.079172 1369496 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 23:26:36.079184 1369496 cache.go:59] Caching tarball of preloaded images
	I1027 23:26:36.079195 1369496 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 23:26:36.079272 1369496 preload.go:233] Found /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 23:26:36.079282 1369496 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 23:26:36.079381 1369496 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/default-k8s-diff-port-336451/config.json ...
	I1027 23:26:36.079404 1369496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/default-k8s-diff-port-336451/config.json: {Name:mk0d9878336442927bdb407478e9b4ddf2b7f9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:26:36.100241 1369496 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 23:26:36.100267 1369496 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 23:26:36.100287 1369496 cache.go:233] Successfully downloaded all kic artifacts
	I1027 23:26:36.100310 1369496 start.go:360] acquireMachinesLock for default-k8s-diff-port-336451: {Name:mkecd163bf05ad01d249b2c36cade7dcbe62d611 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:26:36.100431 1369496 start.go:364] duration metric: took 98.578µs to acquireMachinesLock for "default-k8s-diff-port-336451"
	I1027 23:26:36.100463 1369496 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-336451 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-336451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:26:36.100532 1369496 start.go:125] createHost starting for "" (driver="docker")
	I1027 23:26:36.105788 1369496 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 23:26:36.106082 1369496 start.go:159] libmachine.API.Create for "default-k8s-diff-port-336451" (driver="docker")
	I1027 23:26:36.106142 1369496 client.go:173] LocalClient.Create starting
	I1027 23:26:36.106240 1369496 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem
	I1027 23:26:36.106285 1369496 main.go:143] libmachine: Decoding PEM data...
	I1027 23:26:36.106307 1369496 main.go:143] libmachine: Parsing certificate...
	I1027 23:26:36.106368 1369496 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem
	I1027 23:26:36.106429 1369496 main.go:143] libmachine: Decoding PEM data...
	I1027 23:26:36.106441 1369496 main.go:143] libmachine: Parsing certificate...
	I1027 23:26:36.106840 1369496 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-336451 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 23:26:36.123575 1369496 cli_runner.go:211] docker network inspect default-k8s-diff-port-336451 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 23:26:36.123664 1369496 network_create.go:284] running [docker network inspect default-k8s-diff-port-336451] to gather additional debugging logs...
	I1027 23:26:36.123683 1369496 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-336451
	W1027 23:26:36.140848 1369496 cli_runner.go:211] docker network inspect default-k8s-diff-port-336451 returned with exit code 1
	I1027 23:26:36.140902 1369496 network_create.go:287] error running [docker network inspect default-k8s-diff-port-336451]: docker network inspect default-k8s-diff-port-336451: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-336451 not found
	I1027 23:26:36.140916 1369496 network_create.go:289] output of [docker network inspect default-k8s-diff-port-336451]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-336451 not found
	
	** /stderr **
	I1027 23:26:36.141021 1369496 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:26:36.158811 1369496 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bec5bade6d32 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:b8:32:37:d1:1a} reservation:<nil>}
	I1027 23:26:36.159150 1369496 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0dc359f1a23c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:03:b5:bc:b2:ab} reservation:<nil>}
	I1027 23:26:36.159500 1369496 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6865072e7c41 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:f3:83:1f:14:0e} reservation:<nil>}
	I1027 23:26:36.159931 1369496 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d99a0}
	I1027 23:26:36.159956 1369496 network_create.go:124] attempt to create docker network default-k8s-diff-port-336451 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1027 23:26:36.160014 1369496 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-336451 default-k8s-diff-port-336451
	I1027 23:26:36.219192 1369496 network_create.go:108] docker network default-k8s-diff-port-336451 192.168.76.0/24 created
	I1027 23:26:36.219224 1369496 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-336451" container
	I1027 23:26:36.219309 1369496 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 23:26:36.236142 1369496 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-336451 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-336451 --label created_by.minikube.sigs.k8s.io=true
	I1027 23:26:36.262338 1369496 oci.go:103] Successfully created a docker volume default-k8s-diff-port-336451
	I1027 23:26:36.262502 1369496 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-336451-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-336451 --entrypoint /usr/bin/test -v default-k8s-diff-port-336451:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 23:26:36.852519 1369496 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-336451
	I1027 23:26:36.852566 1369496 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:26:36.852587 1369496 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 23:26:36.852660 1369496 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-336451:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Oct 27 23:26:29 embed-certs-790322 crio[839]: time="2025-10-27T23:26:29.817981093Z" level=info msg="Created container 0f147d6f28117c6d6345183263be66ffeab3357cf82e0765c8ad949146afe45c: kube-system/coredns-66bc5c9577-7czsv/coredns" id=0177fe93-f804-41b4-9046-ff8ea7a98d17 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:26:29 embed-certs-790322 crio[839]: time="2025-10-27T23:26:29.819260053Z" level=info msg="Starting container: 0f147d6f28117c6d6345183263be66ffeab3357cf82e0765c8ad949146afe45c" id=7be5fdf5-aebf-4d89-9f12-85600533dab4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:26:29 embed-certs-790322 crio[839]: time="2025-10-27T23:26:29.823698639Z" level=info msg="Started container" PID=1748 containerID=0f147d6f28117c6d6345183263be66ffeab3357cf82e0765c8ad949146afe45c description=kube-system/coredns-66bc5c9577-7czsv/coredns id=7be5fdf5-aebf-4d89-9f12-85600533dab4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=15301c887054c2ccb14e1eb3079ade3c7217ba4a53163582e79f489d96ef13c0
	Oct 27 23:26:32 embed-certs-790322 crio[839]: time="2025-10-27T23:26:32.888810994Z" level=info msg="Running pod sandbox: default/busybox/POD" id=217b74ae-93ed-455b-bc40-4177e036fbcd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:26:32 embed-certs-790322 crio[839]: time="2025-10-27T23:26:32.888879836Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:26:32 embed-certs-790322 crio[839]: time="2025-10-27T23:26:32.894539419Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1791bd3a3fb1230566bb80651844d73fb50aa35eb41c3734a15a243d4fb38d2c UID:99fa1637-d815-4323-b100-31f27733f2dc NetNS:/var/run/netns/f3d2b979-b503-4520-a890-8bbf086f309f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000f88a0}] Aliases:map[]}"
	Oct 27 23:26:32 embed-certs-790322 crio[839]: time="2025-10-27T23:26:32.894575005Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 27 23:26:32 embed-certs-790322 crio[839]: time="2025-10-27T23:26:32.90339547Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1791bd3a3fb1230566bb80651844d73fb50aa35eb41c3734a15a243d4fb38d2c UID:99fa1637-d815-4323-b100-31f27733f2dc NetNS:/var/run/netns/f3d2b979-b503-4520-a890-8bbf086f309f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000f88a0}] Aliases:map[]}"
	Oct 27 23:26:32 embed-certs-790322 crio[839]: time="2025-10-27T23:26:32.903542352Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 27 23:26:32 embed-certs-790322 crio[839]: time="2025-10-27T23:26:32.907795171Z" level=info msg="Ran pod sandbox 1791bd3a3fb1230566bb80651844d73fb50aa35eb41c3734a15a243d4fb38d2c with infra container: default/busybox/POD" id=217b74ae-93ed-455b-bc40-4177e036fbcd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:26:32 embed-certs-790322 crio[839]: time="2025-10-27T23:26:32.909121844Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=299155a9-f78d-4409-8b29-83f4667d1674 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:26:32 embed-certs-790322 crio[839]: time="2025-10-27T23:26:32.909252783Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=299155a9-f78d-4409-8b29-83f4667d1674 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:26:32 embed-certs-790322 crio[839]: time="2025-10-27T23:26:32.909293104Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=299155a9-f78d-4409-8b29-83f4667d1674 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:26:32 embed-certs-790322 crio[839]: time="2025-10-27T23:26:32.915288158Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3b735db9-3d5b-4fde-8921-423b23f42232 name=/runtime.v1.ImageService/PullImage
	Oct 27 23:26:32 embed-certs-790322 crio[839]: time="2025-10-27T23:26:32.918088378Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 27 23:26:35 embed-certs-790322 crio[839]: time="2025-10-27T23:26:35.21060242Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=3b735db9-3d5b-4fde-8921-423b23f42232 name=/runtime.v1.ImageService/PullImage
	Oct 27 23:26:35 embed-certs-790322 crio[839]: time="2025-10-27T23:26:35.2128628Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4f61503e-2076-4656-9df8-7d5ebd03130c name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:26:35 embed-certs-790322 crio[839]: time="2025-10-27T23:26:35.219691565Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=07718b0f-bc51-445e-9f0d-e281cfe58420 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:26:35 embed-certs-790322 crio[839]: time="2025-10-27T23:26:35.228558676Z" level=info msg="Creating container: default/busybox/busybox" id=15608708-2bf8-4c99-8b2a-d880fa8a87b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:26:35 embed-certs-790322 crio[839]: time="2025-10-27T23:26:35.22871602Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:26:35 embed-certs-790322 crio[839]: time="2025-10-27T23:26:35.235105542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:26:35 embed-certs-790322 crio[839]: time="2025-10-27T23:26:35.235570853Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:26:35 embed-certs-790322 crio[839]: time="2025-10-27T23:26:35.257001144Z" level=info msg="Created container 2761cd2dc3d5e65b93efee59c7e95a9fa844d0e0df5c1cf944ad774b75831192: default/busybox/busybox" id=15608708-2bf8-4c99-8b2a-d880fa8a87b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:26:35 embed-certs-790322 crio[839]: time="2025-10-27T23:26:35.264897428Z" level=info msg="Starting container: 2761cd2dc3d5e65b93efee59c7e95a9fa844d0e0df5c1cf944ad774b75831192" id=f554e154-c717-45e6-acf5-45f9662b193a name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:26:35 embed-certs-790322 crio[839]: time="2025-10-27T23:26:35.269719223Z" level=info msg="Started container" PID=1800 containerID=2761cd2dc3d5e65b93efee59c7e95a9fa844d0e0df5c1cf944ad774b75831192 description=default/busybox/busybox id=f554e154-c717-45e6-acf5-45f9662b193a name=/runtime.v1.RuntimeService/StartContainer sandboxID=1791bd3a3fb1230566bb80651844d73fb50aa35eb41c3734a15a243d4fb38d2c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	2761cd2dc3d5e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   1791bd3a3fb12       busybox                                      default
	0f147d6f28117       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   15301c887054c       coredns-66bc5c9577-7czsv                     kube-system
	bd313f08123a8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   53621b016f809       storage-provisioner                          kube-system
	e3a144fda09ba       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   ad320a3d6cadc       kindnet-l2rcj                                kube-system
	ce64bab471f14       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   8bb18c7984abd       kube-proxy-7lwt5                             kube-system
	db4a7f290079e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   625bb4dc5e23c       kube-controller-manager-embed-certs-790322   kube-system
	abd15c7dbdf87       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   e08b81d892166       etcd-embed-certs-790322                      kube-system
	6b3f2f58362c1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   6a663f5e11778       kube-apiserver-embed-certs-790322            kube-system
	ba9a6223bbdc8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   46588315bfd53       kube-scheduler-embed-certs-790322            kube-system
	
	
	==> coredns [0f147d6f28117c6d6345183263be66ffeab3357cf82e0765c8ad949146afe45c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44383 - 59395 "HINFO IN 2430642412199553951.9211493837881061506. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039207329s
	
	
	==> describe nodes <==
	Name:               embed-certs-790322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-790322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=embed-certs-790322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T23_25_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 23:25:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-790322
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 23:26:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 23:26:29 +0000   Mon, 27 Oct 2025 23:25:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 23:26:29 +0000   Mon, 27 Oct 2025 23:25:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 23:26:29 +0000   Mon, 27 Oct 2025 23:25:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 23:26:29 +0000   Mon, 27 Oct 2025 23:26:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-790322
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                303b75c8-bfe7-43fd-a2ff-1f7c0bfb24ff
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-7czsv                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-790322                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         63s
	  kube-system                 kindnet-l2rcj                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-embed-certs-790322             250m (12%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-controller-manager-embed-certs-790322    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-7lwt5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-embed-certs-790322             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node embed-certs-790322 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node embed-certs-790322 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     74s (x8 over 74s)  kubelet          Node embed-certs-790322 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-790322 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-790322 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-790322 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-790322 event: Registered Node embed-certs-790322 in Controller
	  Normal   NodeReady                15s                kubelet          Node embed-certs-790322 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct27 23:01] overlayfs: idmapped layers are currently not supported
	[ +42.515610] overlayfs: idmapped layers are currently not supported
	[Oct27 23:02] overlayfs: idmapped layers are currently not supported
	[Oct27 23:03] overlayfs: idmapped layers are currently not supported
	[Oct27 23:04] overlayfs: idmapped layers are currently not supported
	[Oct27 23:06] overlayfs: idmapped layers are currently not supported
	[  +3.129054] overlayfs: idmapped layers are currently not supported
	[Oct27 23:08] overlayfs: idmapped layers are currently not supported
	[Oct27 23:09] overlayfs: idmapped layers are currently not supported
	[  +0.696324] overlayfs: idmapped layers are currently not supported
	[ +42.065460] overlayfs: idmapped layers are currently not supported
	[Oct27 23:10] overlayfs: idmapped layers are currently not supported
	[ +23.722860] overlayfs: idmapped layers are currently not supported
	[Oct27 23:16] overlayfs: idmapped layers are currently not supported
	[Oct27 23:17] overlayfs: idmapped layers are currently not supported
	[Oct27 23:18] overlayfs: idmapped layers are currently not supported
	[Oct27 23:19] overlayfs: idmapped layers are currently not supported
	[Oct27 23:20] overlayfs: idmapped layers are currently not supported
	[Oct27 23:21] overlayfs: idmapped layers are currently not supported
	[Oct27 23:22] overlayfs: idmapped layers are currently not supported
	[ +34.590925] overlayfs: idmapped layers are currently not supported
	[Oct27 23:23] overlayfs: idmapped layers are currently not supported
	[  +6.906011] overlayfs: idmapped layers are currently not supported
	[Oct27 23:25] overlayfs: idmapped layers are currently not supported
	[  +2.284017] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [abd15c7dbdf871759932a951d1e2dd93d8cec2c8956f979edd7394dbd4903b3b] <==
	{"level":"warn","ts":"2025-10-27T23:25:36.054660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:36.170239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:36.215230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:36.267459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:36.331821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:36.410500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:36.479405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:36.601956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:36.626359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:36.693934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:36.762977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:36.831686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:36.903424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:36.945274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:36.979082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:37.012300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:37.059023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:37.112548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:37.165417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:37.200328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:37.246738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:37.283493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:37.333857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:37.402842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:25:37.597399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44306","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:26:44 up  6:09,  0 user,  load average: 4.11, 4.06, 3.36
	Linux embed-certs-790322 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e3a144fda09ba5eec33e1e278c6fb783acc23460fc0e6a29ac1f5e83ddba4d7f] <==
	I1027 23:25:49.041996       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:25:49.042350       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 23:25:49.045198       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:25:49.045222       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:25:49.045239       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:25:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:25:49.237102       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:25:49.237136       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:25:49.237148       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:25:49.237276       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 23:26:19.226601       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 23:26:19.226605       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 23:26:19.226824       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 23:26:19.227862       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1027 23:26:20.437323       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 23:26:20.437352       1 metrics.go:72] Registering metrics
	I1027 23:26:20.437427       1 controller.go:711] "Syncing nftables rules"
	I1027 23:26:29.230546       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 23:26:29.230658       1 main.go:301] handling current node
	I1027 23:26:39.226486       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 23:26:39.226606       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6b3f2f58362c1c97e148e2065d49ea4ca02fe73effe7f88564e03189d3df15d5] <==
	I1027 23:25:39.615668       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 23:25:39.667019       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 23:25:39.691492       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:25:39.731456       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1027 23:25:39.745587       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1027 23:25:39.796168       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:25:39.796895       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 23:25:39.816356       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 23:25:39.920304       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 23:25:39.945649       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 23:25:39.945678       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 23:25:41.390707       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 23:25:41.462435       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 23:25:41.634273       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 23:25:41.644683       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1027 23:25:41.645883       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 23:25:41.655824       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 23:25:41.997789       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 23:25:42.769215       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 23:25:42.796967       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 23:25:42.831314       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 23:25:47.695733       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1027 23:25:47.975921       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:25:48.003682       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:25:48.050292       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [db4a7f290079e2933f7ce7d9e13c88a9fb8bc0a6b6b8023fc840f11aaa48ab74] <==
	I1027 23:25:47.088291       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 23:25:47.088351       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 23:25:47.088475       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 23:25:47.088658       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 23:25:47.088877       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 23:25:47.092119       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 23:25:47.092197       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 23:25:47.092211       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 23:25:47.092940       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 23:25:47.093081       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 23:25:47.093171       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-790322"
	I1027 23:25:47.093205       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 23:25:47.093576       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 23:25:47.100482       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 23:25:47.100888       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 23:25:47.100937       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 23:25:47.100951       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 23:25:47.109113       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 23:25:47.110321       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:25:47.113722       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:25:47.113779       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:25:47.113785       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 23:25:47.113794       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 23:25:47.157209       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:26:32.100786       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ce64bab471f14c7d7941e4f50171a406cc04e4e1e48a0af327c1d271967f1ae7] <==
	I1027 23:25:48.997961       1 server_linux.go:53] "Using iptables proxy"
	I1027 23:25:49.148172       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 23:25:49.248886       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 23:25:49.248970       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 23:25:49.249137       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 23:25:49.403678       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:25:49.403742       1 server_linux.go:132] "Using iptables Proxier"
	I1027 23:25:49.430720       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 23:25:49.431041       1 server.go:527] "Version info" version="v1.34.1"
	I1027 23:25:49.431055       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:25:49.432224       1 config.go:200] "Starting service config controller"
	I1027 23:25:49.432237       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 23:25:49.432447       1 config.go:106] "Starting endpoint slice config controller"
	I1027 23:25:49.432454       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 23:25:49.432469       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 23:25:49.432473       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 23:25:49.433170       1 config.go:309] "Starting node config controller"
	I1027 23:25:49.433178       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 23:25:49.433184       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 23:25:49.534635       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 23:25:49.534737       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 23:25:49.534751       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ba9a6223bbdc8bc539a37eff01a99f9898dd6cb1220bc8be85044acd361b17c6] <==
	E1027 23:25:39.857787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1027 23:25:39.865744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 23:25:39.865901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 23:25:39.866017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 23:25:39.866130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 23:25:39.866222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 23:25:39.866315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 23:25:39.866430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 23:25:39.866513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 23:25:39.866587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 23:25:39.866660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 23:25:39.866731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 23:25:39.870612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 23:25:39.870671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 23:25:39.870711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 23:25:39.878862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 23:25:39.879015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 23:25:39.879101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 23:25:40.747578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 23:25:40.756791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 23:25:40.821238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 23:25:40.828411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 23:25:40.852219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 23:25:41.146094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1027 23:25:44.143793       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 23:25:47 embed-certs-790322 kubelet[1312]: I1027 23:25:47.782661    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c50bbe3e-12b4-4007-aa20-dfd1b04d38aa-lib-modules\") pod \"kindnet-l2rcj\" (UID: \"c50bbe3e-12b4-4007-aa20-dfd1b04d38aa\") " pod="kube-system/kindnet-l2rcj"
	Oct 27 23:25:47 embed-certs-790322 kubelet[1312]: I1027 23:25:47.782685    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c50bbe3e-12b4-4007-aa20-dfd1b04d38aa-cni-cfg\") pod \"kindnet-l2rcj\" (UID: \"c50bbe3e-12b4-4007-aa20-dfd1b04d38aa\") " pod="kube-system/kindnet-l2rcj"
	Oct 27 23:25:47 embed-certs-790322 kubelet[1312]: I1027 23:25:47.782704    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c50bbe3e-12b4-4007-aa20-dfd1b04d38aa-xtables-lock\") pod \"kindnet-l2rcj\" (UID: \"c50bbe3e-12b4-4007-aa20-dfd1b04d38aa\") " pod="kube-system/kindnet-l2rcj"
	Oct 27 23:25:47 embed-certs-790322 kubelet[1312]: I1027 23:25:47.883723    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d8f2c0d-30b5-487c-9d9e-e7be86b3be39-xtables-lock\") pod \"kube-proxy-7lwt5\" (UID: \"5d8f2c0d-30b5-487c-9d9e-e7be86b3be39\") " pod="kube-system/kube-proxy-7lwt5"
	Oct 27 23:25:47 embed-certs-790322 kubelet[1312]: I1027 23:25:47.884150    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d8f2c0d-30b5-487c-9d9e-e7be86b3be39-lib-modules\") pod \"kube-proxy-7lwt5\" (UID: \"5d8f2c0d-30b5-487c-9d9e-e7be86b3be39\") " pod="kube-system/kube-proxy-7lwt5"
	Oct 27 23:25:47 embed-certs-790322 kubelet[1312]: I1027 23:25:47.884344    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5d8f2c0d-30b5-487c-9d9e-e7be86b3be39-kube-proxy\") pod \"kube-proxy-7lwt5\" (UID: \"5d8f2c0d-30b5-487c-9d9e-e7be86b3be39\") " pod="kube-system/kube-proxy-7lwt5"
	Oct 27 23:25:47 embed-certs-790322 kubelet[1312]: I1027 23:25:47.884809    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhjjt\" (UniqueName: \"kubernetes.io/projected/5d8f2c0d-30b5-487c-9d9e-e7be86b3be39-kube-api-access-bhjjt\") pod \"kube-proxy-7lwt5\" (UID: \"5d8f2c0d-30b5-487c-9d9e-e7be86b3be39\") " pod="kube-system/kube-proxy-7lwt5"
	Oct 27 23:25:47 embed-certs-790322 kubelet[1312]: E1027 23:25:47.936569    1312 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 27 23:25:47 embed-certs-790322 kubelet[1312]: E1027 23:25:47.936624    1312 projected.go:196] Error preparing data for projected volume kube-api-access-x6k62 for pod kube-system/kindnet-l2rcj: configmap "kube-root-ca.crt" not found
	Oct 27 23:25:47 embed-certs-790322 kubelet[1312]: E1027 23:25:47.936715    1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c50bbe3e-12b4-4007-aa20-dfd1b04d38aa-kube-api-access-x6k62 podName:c50bbe3e-12b4-4007-aa20-dfd1b04d38aa nodeName:}" failed. No retries permitted until 2025-10-27 23:25:48.436689093 +0000 UTC m=+5.723058485 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x6k62" (UniqueName: "kubernetes.io/projected/c50bbe3e-12b4-4007-aa20-dfd1b04d38aa-kube-api-access-x6k62") pod "kindnet-l2rcj" (UID: "c50bbe3e-12b4-4007-aa20-dfd1b04d38aa") : configmap "kube-root-ca.crt" not found
	Oct 27 23:25:48 embed-certs-790322 kubelet[1312]: I1027 23:25:48.026077    1312 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 27 23:25:48 embed-certs-790322 kubelet[1312]: I1027 23:25:48.701172    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7lwt5" podStartSLOduration=1.7011488350000001 podStartE2EDuration="1.701148835s" podCreationTimestamp="2025-10-27 23:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:25:48.463844469 +0000 UTC m=+5.750213869" watchObservedRunningTime="2025-10-27 23:25:48.701148835 +0000 UTC m=+5.987518267"
	Oct 27 23:25:50 embed-certs-790322 kubelet[1312]: I1027 23:25:50.404227    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-l2rcj" podStartSLOduration=3.404204621 podStartE2EDuration="3.404204621s" podCreationTimestamp="2025-10-27 23:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:25:49.336518281 +0000 UTC m=+6.622887681" watchObservedRunningTime="2025-10-27 23:25:50.404204621 +0000 UTC m=+7.690574013"
	Oct 27 23:26:29 embed-certs-790322 kubelet[1312]: I1027 23:26:29.280852    1312 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 27 23:26:29 embed-certs-790322 kubelet[1312]: I1027 23:26:29.419255    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2d42c557-cbb9-445c-8bd8-7b481a959c11-tmp\") pod \"storage-provisioner\" (UID: \"2d42c557-cbb9-445c-8bd8-7b481a959c11\") " pod="kube-system/storage-provisioner"
	Oct 27 23:26:29 embed-certs-790322 kubelet[1312]: I1027 23:26:29.419313    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2949488f-bf74-4218-b480-955908b58ac0-config-volume\") pod \"coredns-66bc5c9577-7czsv\" (UID: \"2949488f-bf74-4218-b480-955908b58ac0\") " pod="kube-system/coredns-66bc5c9577-7czsv"
	Oct 27 23:26:29 embed-certs-790322 kubelet[1312]: I1027 23:26:29.419338    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmmkg\" (UniqueName: \"kubernetes.io/projected/2949488f-bf74-4218-b480-955908b58ac0-kube-api-access-vmmkg\") pod \"coredns-66bc5c9577-7czsv\" (UID: \"2949488f-bf74-4218-b480-955908b58ac0\") " pod="kube-system/coredns-66bc5c9577-7czsv"
	Oct 27 23:26:29 embed-certs-790322 kubelet[1312]: I1027 23:26:29.419359    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkwzf\" (UniqueName: \"kubernetes.io/projected/2d42c557-cbb9-445c-8bd8-7b481a959c11-kube-api-access-jkwzf\") pod \"storage-provisioner\" (UID: \"2d42c557-cbb9-445c-8bd8-7b481a959c11\") " pod="kube-system/storage-provisioner"
	Oct 27 23:26:29 embed-certs-790322 kubelet[1312]: W1027 23:26:29.683797    1312 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88/crio-53621b016f809a9ee9ecba4a1cf0ffffc063700115df59fb94308aa18f385575 WatchSource:0}: Error finding container 53621b016f809a9ee9ecba4a1cf0ffffc063700115df59fb94308aa18f385575: Status 404 returned error can't find the container with id 53621b016f809a9ee9ecba4a1cf0ffffc063700115df59fb94308aa18f385575
	Oct 27 23:26:29 embed-certs-790322 kubelet[1312]: W1027 23:26:29.725096    1312 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88/crio-15301c887054c2ccb14e1eb3079ade3c7217ba4a53163582e79f489d96ef13c0 WatchSource:0}: Error finding container 15301c887054c2ccb14e1eb3079ade3c7217ba4a53163582e79f489d96ef13c0: Status 404 returned error can't find the container with id 15301c887054c2ccb14e1eb3079ade3c7217ba4a53163582e79f489d96ef13c0
	Oct 27 23:26:30 embed-certs-790322 kubelet[1312]: I1027 23:26:30.420646    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.420630297 podStartE2EDuration="40.420630297s" podCreationTimestamp="2025-10-27 23:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:26:30.400266499 +0000 UTC m=+47.686635891" watchObservedRunningTime="2025-10-27 23:26:30.420630297 +0000 UTC m=+47.706999697"
	Oct 27 23:26:32 embed-certs-790322 kubelet[1312]: I1027 23:26:32.578559    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7czsv" podStartSLOduration=44.578536268 podStartE2EDuration="44.578536268s" podCreationTimestamp="2025-10-27 23:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:26:30.423959829 +0000 UTC m=+47.710329229" watchObservedRunningTime="2025-10-27 23:26:32.578536268 +0000 UTC m=+49.864905668"
	Oct 27 23:26:32 embed-certs-790322 kubelet[1312]: I1027 23:26:32.655588    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm8tz\" (UniqueName: \"kubernetes.io/projected/99fa1637-d815-4323-b100-31f27733f2dc-kube-api-access-tm8tz\") pod \"busybox\" (UID: \"99fa1637-d815-4323-b100-31f27733f2dc\") " pod="default/busybox"
	Oct 27 23:26:32 embed-certs-790322 kubelet[1312]: W1027 23:26:32.905351    1312 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88/crio-1791bd3a3fb1230566bb80651844d73fb50aa35eb41c3734a15a243d4fb38d2c WatchSource:0}: Error finding container 1791bd3a3fb1230566bb80651844d73fb50aa35eb41c3734a15a243d4fb38d2c: Status 404 returned error can't find the container with id 1791bd3a3fb1230566bb80651844d73fb50aa35eb41c3734a15a243d4fb38d2c
	Oct 27 23:26:35 embed-certs-790322 kubelet[1312]: I1027 23:26:35.416802    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.111569254 podStartE2EDuration="3.416783503s" podCreationTimestamp="2025-10-27 23:26:32 +0000 UTC" firstStartedPulling="2025-10-27 23:26:32.909689999 +0000 UTC m=+50.196059399" lastFinishedPulling="2025-10-27 23:26:35.214904256 +0000 UTC m=+52.501273648" observedRunningTime="2025-10-27 23:26:35.416062189 +0000 UTC m=+52.702431589" watchObservedRunningTime="2025-10-27 23:26:35.416783503 +0000 UTC m=+52.703152903"
	
	
	==> storage-provisioner [bd313f08123a8b724a1221a606b0e2ef7fe801da3f56e1254eababc08deebe2f] <==
	I1027 23:26:29.841733       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 23:26:29.917031       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 23:26:29.917106       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 23:26:29.930876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:29.937611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:26:29.937998       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 23:26:29.938226       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-790322_05b4a307-c9cc-437c-b966-7d51445796c0!
	I1027 23:26:29.943483       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe00f650-32eb-4f9d-b262-03caa020ad86", APIVersion:"v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-790322_05b4a307-c9cc-437c-b966-7d51445796c0 became leader
	W1027 23:26:29.943655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:29.953880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:26:30.040933       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-790322_05b4a307-c9cc-437c-b966-7d51445796c0!
	W1027 23:26:31.956907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:31.964821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:33.969092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:33.982576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:35.986232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:35.993402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:37.996803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:38.001488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:40.016596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:40.032251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:42.042274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:42.088218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:44.091752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:26:44.096809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-790322 -n embed-certs-790322
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-790322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.19s)

x
+
TestStartStop/group/embed-certs/serial/Pause (7.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-790322 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-790322 --alsologtostderr -v=1: exit status 80 (2.080117415s)

-- stdout --
	* Pausing node embed-certs-790322 ... 
	
	

-- /stdout --
** stderr ** 
	I1027 23:28:10.592395 1375035 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:28:10.592512 1375035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:28:10.592523 1375035 out.go:374] Setting ErrFile to fd 2...
	I1027 23:28:10.592529 1375035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:28:10.592798 1375035 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:28:10.593155 1375035 out.go:368] Setting JSON to false
	I1027 23:28:10.593186 1375035 mustload.go:66] Loading cluster: embed-certs-790322
	I1027 23:28:10.593586 1375035 config.go:182] Loaded profile config "embed-certs-790322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:28:10.594082 1375035 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:28:10.617672 1375035 host.go:66] Checking if "embed-certs-790322" exists ...
	I1027 23:28:10.618048 1375035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:28:10.692943 1375035 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-27 23:28:10.682955093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:28:10.693587 1375035 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761414747-21797/minikube-v1.37.0-1761414747-21797-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761414747-21797-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-790322 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 23:28:10.697038 1375035 out.go:179] * Pausing node embed-certs-790322 ... 
	I1027 23:28:10.700721 1375035 host.go:66] Checking if "embed-certs-790322" exists ...
	I1027 23:28:10.701105 1375035 ssh_runner.go:195] Run: systemctl --version
	I1027 23:28:10.701157 1375035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:28:10.738435 1375035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:28:10.850584 1375035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:28:10.869632 1375035 pause.go:52] kubelet running: true
	I1027 23:28:10.869708 1375035 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 23:28:11.201847 1375035 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 23:28:11.201949 1375035 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 23:28:11.351934 1375035 cri.go:89] found id: "685f12b4b12a0f9d4b7e38925a0ba384cfd8201d295e923f85d5c37491f0f479"
	I1027 23:28:11.351954 1375035 cri.go:89] found id: "7cb3f092409e678570d4a74471cfdaa27f1dffbc700779b3a9bb259a5c2669ab"
	I1027 23:28:11.351960 1375035 cri.go:89] found id: "81dc02aac9076639d9e778fbd45c09fa3c0cf603955a2ad1a2dad43abd3483e3"
	I1027 23:28:11.351963 1375035 cri.go:89] found id: "dd862bc0975c47b020906fd67965252737767357cd14270fa3ebcf0e580227ec"
	I1027 23:28:11.351967 1375035 cri.go:89] found id: "a25501fea7b4d9ca522fa06ad5ad513cb99d9c3bdc51bc7296798233ca0230d1"
	I1027 23:28:11.351970 1375035 cri.go:89] found id: "99cfb8a94d79f6c5bfe51cd7b6b319af3c0441589946869eae5fa78fc69cdf42"
	I1027 23:28:11.351974 1375035 cri.go:89] found id: "2dd33085839f4b3ec48e1cee1be0d27c1b29b3ebaf8e0437c48d7c3fc9c0602c"
	I1027 23:28:11.351977 1375035 cri.go:89] found id: "04d779de2ba59c56b41e444a5f41bcb57f87bfbcebe9ef9955704cdc0d568248"
	I1027 23:28:11.351980 1375035 cri.go:89] found id: "4cca3101ea45339f788b56e37456e84838b100b57b1522533eaa76028f279109"
	I1027 23:28:11.351986 1375035 cri.go:89] found id: "54aca756edf6b0a8c3a0290a2ca66f5bbb838e6236a4f936a4d1c751c77e8379"
	I1027 23:28:11.351989 1375035 cri.go:89] found id: "b97f21439a7b96012b6e8dfefc7cdd720fd915384d907a5cf119f81e99ecad9c"
	I1027 23:28:11.351993 1375035 cri.go:89] found id: ""
	I1027 23:28:11.352046 1375035 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 23:28:11.366545 1375035 retry.go:31] will retry after 216.981777ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:28:11Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:28:11.583971 1375035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:28:11.600305 1375035 pause.go:52] kubelet running: false
	I1027 23:28:11.600378 1375035 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 23:28:11.872983 1375035 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 23:28:11.873069 1375035 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 23:28:11.982542 1375035 cri.go:89] found id: "685f12b4b12a0f9d4b7e38925a0ba384cfd8201d295e923f85d5c37491f0f479"
	I1027 23:28:11.982569 1375035 cri.go:89] found id: "7cb3f092409e678570d4a74471cfdaa27f1dffbc700779b3a9bb259a5c2669ab"
	I1027 23:28:11.982574 1375035 cri.go:89] found id: "81dc02aac9076639d9e778fbd45c09fa3c0cf603955a2ad1a2dad43abd3483e3"
	I1027 23:28:11.982577 1375035 cri.go:89] found id: "dd862bc0975c47b020906fd67965252737767357cd14270fa3ebcf0e580227ec"
	I1027 23:28:11.982586 1375035 cri.go:89] found id: "a25501fea7b4d9ca522fa06ad5ad513cb99d9c3bdc51bc7296798233ca0230d1"
	I1027 23:28:11.982591 1375035 cri.go:89] found id: "99cfb8a94d79f6c5bfe51cd7b6b319af3c0441589946869eae5fa78fc69cdf42"
	I1027 23:28:11.982594 1375035 cri.go:89] found id: "2dd33085839f4b3ec48e1cee1be0d27c1b29b3ebaf8e0437c48d7c3fc9c0602c"
	I1027 23:28:11.982597 1375035 cri.go:89] found id: "04d779de2ba59c56b41e444a5f41bcb57f87bfbcebe9ef9955704cdc0d568248"
	I1027 23:28:11.982601 1375035 cri.go:89] found id: "4cca3101ea45339f788b56e37456e84838b100b57b1522533eaa76028f279109"
	I1027 23:28:11.982607 1375035 cri.go:89] found id: "54aca756edf6b0a8c3a0290a2ca66f5bbb838e6236a4f936a4d1c751c77e8379"
	I1027 23:28:11.982611 1375035 cri.go:89] found id: "b97f21439a7b96012b6e8dfefc7cdd720fd915384d907a5cf119f81e99ecad9c"
	I1027 23:28:11.982614 1375035 cri.go:89] found id: ""
	I1027 23:28:11.982676 1375035 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 23:28:11.996215 1375035 retry.go:31] will retry after 255.41855ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:28:11Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:28:12.252697 1375035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:28:12.269616 1375035 pause.go:52] kubelet running: false
	I1027 23:28:12.269676 1375035 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 23:28:12.501224 1375035 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 23:28:12.501311 1375035 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 23:28:12.579347 1375035 cri.go:89] found id: "685f12b4b12a0f9d4b7e38925a0ba384cfd8201d295e923f85d5c37491f0f479"
	I1027 23:28:12.579367 1375035 cri.go:89] found id: "7cb3f092409e678570d4a74471cfdaa27f1dffbc700779b3a9bb259a5c2669ab"
	I1027 23:28:12.579372 1375035 cri.go:89] found id: "81dc02aac9076639d9e778fbd45c09fa3c0cf603955a2ad1a2dad43abd3483e3"
	I1027 23:28:12.579376 1375035 cri.go:89] found id: "dd862bc0975c47b020906fd67965252737767357cd14270fa3ebcf0e580227ec"
	I1027 23:28:12.579384 1375035 cri.go:89] found id: "a25501fea7b4d9ca522fa06ad5ad513cb99d9c3bdc51bc7296798233ca0230d1"
	I1027 23:28:12.579388 1375035 cri.go:89] found id: "99cfb8a94d79f6c5bfe51cd7b6b319af3c0441589946869eae5fa78fc69cdf42"
	I1027 23:28:12.579391 1375035 cri.go:89] found id: "2dd33085839f4b3ec48e1cee1be0d27c1b29b3ebaf8e0437c48d7c3fc9c0602c"
	I1027 23:28:12.579395 1375035 cri.go:89] found id: "04d779de2ba59c56b41e444a5f41bcb57f87bfbcebe9ef9955704cdc0d568248"
	I1027 23:28:12.579398 1375035 cri.go:89] found id: "4cca3101ea45339f788b56e37456e84838b100b57b1522533eaa76028f279109"
	I1027 23:28:12.579407 1375035 cri.go:89] found id: "54aca756edf6b0a8c3a0290a2ca66f5bbb838e6236a4f936a4d1c751c77e8379"
	I1027 23:28:12.579411 1375035 cri.go:89] found id: "b97f21439a7b96012b6e8dfefc7cdd720fd915384d907a5cf119f81e99ecad9c"
	I1027 23:28:12.579413 1375035 cri.go:89] found id: ""
	I1027 23:28:12.579459 1375035 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 23:28:12.595381 1375035 out.go:203] 
	W1027 23:28:12.598316 1375035 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:28:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:28:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 23:28:12.598345 1375035 out.go:285] * 
	* 
	W1027 23:28:12.608261 1375035 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 23:28:12.611277 1375035 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-790322 --alsologtostderr -v=1 failed: exit status 80
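Editor's note: the pause path fails because `sudo runc list -f json` exits 1 with `open /run/runc: no such file or directory`, and minikube's retry wrapper (retry.go, seen above backing off by ~255ms) only delays the eventual GUEST_PAUSE exit. Below is a minimal Go sketch of that probe-and-retry loop, runnable from the host: the container name comes from this report, while the docker-exec transport and the backoff constants are illustrative assumptions, not minikube's actual implementation.

	// sketch.go: re-run the container listing that the pause path retries above.
	// Assumptions: docker CLI on PATH; profile container "embed-certs-790322".
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		delay := 250 * time.Millisecond // close to the 255.41855ms backoff logged above
		for attempt := 1; attempt <= 5; attempt++ {
			// Same command the pause code runs over SSH in the trace.
			out, err := exec.Command("docker", "exec", "embed-certs-790322",
				"sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				fmt.Printf("runc list succeeded:\n%s", out)
				return
			}
			// "open /run/runc: no such file or directory" surfaces here.
			fmt.Printf("attempt %d: %v\n%s", attempt, err, out)
			time.Sleep(delay)
			delay *= 2 // simple exponential backoff as a stand-in for retry.go
		}
		fmt.Println("still failing; minikube exits with GUEST_PAUSE at this point")
	}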
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-790322
helpers_test.go:243: (dbg) docker inspect embed-certs-790322:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88",
	        "Created": "2025-10-27T23:25:09.592548844Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1372248,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T23:26:58.024355998Z",
	            "FinishedAt": "2025-10-27T23:26:56.962967944Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88/hosts",
	        "LogPath": "/var/lib/docker/containers/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88-json.log",
	        "Name": "/embed-certs-790322",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-790322:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-790322",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88",
	                "LowerDir": "/var/lib/docker/overlay2/2ae6e33e0abf8cb5abe216433ff774e2094abeb181f625d12b51874bce8486b6-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2ae6e33e0abf8cb5abe216433ff774e2094abeb181f625d12b51874bce8486b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2ae6e33e0abf8cb5abe216433ff774e2094abeb181f625d12b51874bce8486b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2ae6e33e0abf8cb5abe216433ff774e2094abeb181f625d12b51874bce8486b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-790322",
	                "Source": "/var/lib/docker/volumes/embed-certs-790322/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-790322",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-790322",
	                "name.minikube.sigs.k8s.io": "embed-certs-790322",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b9c6a10432ae92d29bcf105db510e223adf32a22224e6daa6ddc959e54a6a67d",
	            "SandboxKey": "/var/run/docker/netns/b9c6a10432ae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34589"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34590"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34593"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34591"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34592"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-790322": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:89:b9:19:98:1d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "49c1672ada24cf39a040b77c54572c8441994ff7afeb8ca5778d5d7aaf9fecd8",
	                    "EndpointID": "eefec1e90bffcb5fd648cbac499815ab57f6148fa11712e40f6b5acd6db02f95",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-790322",
	                        "f2a16ed0b5f1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
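Editor's note: in the inspect output above, HostConfig.PortBindings requests ephemeral ports (empty HostPort values), and the ports Docker actually assigned appear under NetworkSettings.Ports (22/tcp on 127.0.0.1:34589, 8443/tcp on 34592, and so on). The harness reads these with a Go template via `docker container inspect -f`, as the cli_runner lines later in this log show. A self-contained sketch of the same lookup follows; the profile name is taken from this report, everything else is an illustrative assumption.

	// portlookup.go: recover the host port mapped to the node's SSH port (22/tcp).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template the harness uses; exec.Command bypasses the shell,
		// so the template needs no extra quoting.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", tmpl, "embed-certs-790322").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// Prints 34589 for the container state captured above.
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}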
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790322 -n embed-certs-790322
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790322 -n embed-certs-790322: exit status 2 (446.30324ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-790322 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-790322 logs -n 25: (1.791120446s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:24 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-477179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ start   │ -p old-k8s-version-477179 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:24 UTC │
	│ image   │ old-k8s-version-477179 image list --format=json                                                                                                                                                                                               │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │ 27 Oct 25 23:24 UTC │
	│ pause   │ -p old-k8s-version-477179 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │                     │
	│ delete  │ -p old-k8s-version-477179                                                                                                                                                                                                                     │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │ 27 Oct 25 23:25 UTC │
	│ delete  │ -p old-k8s-version-477179                                                                                                                                                                                                                     │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ start   │ -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:26 UTC │
	│ addons  │ enable metrics-server -p no-preload-947754 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │                     │
	│ stop    │ -p no-preload-947754 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ addons  │ enable dashboard -p no-preload-947754 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ start   │ -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:26 UTC │
	│ image   │ no-preload-947754 image list --format=json                                                                                                                                                                                                    │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ pause   │ -p no-preload-947754 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	│ delete  │ -p no-preload-947754                                                                                                                                                                                                                          │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ delete  │ -p no-preload-947754                                                                                                                                                                                                                          │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ delete  │ -p disable-driver-mounts-247293                                                                                                                                                                                                               │ disable-driver-mounts-247293 │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ start   │ -p default-k8s-diff-port-336451 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:28 UTC │
	│ addons  │ enable metrics-server -p embed-certs-790322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	│ stop    │ -p embed-certs-790322 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-790322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ start   │ -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:27 UTC │
	│ image   │ embed-certs-790322 image list --format=json                                                                                                                                                                                                   │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ pause   │ -p embed-certs-790322 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-336451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 23:26:57
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 23:26:57.629666 1372118 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:26:57.630326 1372118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:26:57.630364 1372118 out.go:374] Setting ErrFile to fd 2...
	I1027 23:26:57.630435 1372118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:26:57.630762 1372118 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:26:57.631216 1372118 out.go:368] Setting JSON to false
	I1027 23:26:57.632240 1372118 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22167,"bootTime":1761585451,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 23:26:57.632349 1372118 start.go:143] virtualization:  
	I1027 23:26:57.635499 1372118 out.go:179] * [embed-certs-790322] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 23:26:57.639638 1372118 notify.go:221] Checking for updates...
	I1027 23:26:57.640621 1372118 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:26:57.646013 1372118 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:26:57.649169 1372118 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:26:57.652247 1372118 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 23:26:57.655512 1372118 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 23:26:57.658358 1372118 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 23:26:57.661854 1372118 config.go:182] Loaded profile config "embed-certs-790322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:26:57.662570 1372118 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:26:57.719881 1372118 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 23:26:57.719979 1372118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:26:57.816133 1372118 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 23:26:57.801869037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:26:57.816234 1372118 docker.go:318] overlay module found
	I1027 23:26:57.819654 1372118 out.go:179] * Using the docker driver based on existing profile
	I1027 23:26:57.822419 1372118 start.go:307] selected driver: docker
	I1027 23:26:57.822435 1372118 start.go:928] validating driver "docker" against &{Name:embed-certs-790322 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:26:57.822557 1372118 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:26:57.823249 1372118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:26:57.911780 1372118 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 23:26:57.902033646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:26:57.912102 1372118 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:26:57.912132 1372118 cni.go:84] Creating CNI manager for ""
	I1027 23:26:57.912183 1372118 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:26:57.912218 1372118 start.go:351] cluster config:
	{Name:embed-certs-790322 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:26:57.915350 1372118 out.go:179] * Starting "embed-certs-790322" primary control-plane node in "embed-certs-790322" cluster
	I1027 23:26:57.918215 1372118 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 23:26:57.921146 1372118 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 23:26:57.923980 1372118 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:26:57.924038 1372118 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 23:26:57.924062 1372118 cache.go:59] Caching tarball of preloaded images
	I1027 23:26:57.924148 1372118 preload.go:233] Found /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 23:26:57.924157 1372118 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 23:26:57.924286 1372118 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/config.json ...
	I1027 23:26:57.924490 1372118 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 23:26:57.946720 1372118 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 23:26:57.946741 1372118 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 23:26:57.946755 1372118 cache.go:233] Successfully downloaded all kic artifacts
	I1027 23:26:57.946778 1372118 start.go:360] acquireMachinesLock for embed-certs-790322: {Name:mk0a741ca206e2e37bd9112a34c7fc5ed8359e78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:26:57.946830 1372118 start.go:364] duration metric: took 33.239µs to acquireMachinesLock for "embed-certs-790322"
	I1027 23:26:57.946849 1372118 start.go:96] Skipping create...Using existing machine configuration
	I1027 23:26:57.946854 1372118 fix.go:55] fixHost starting: 
	I1027 23:26:57.947100 1372118 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:26:57.980727 1372118 fix.go:113] recreateIfNeeded on embed-certs-790322: state=Stopped err=<nil>
	W1027 23:26:57.980756 1372118 fix.go:139] unexpected machine state, will restart: <nil>
	I1027 23:26:56.025667 1369496 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 23:26:56.026130 1369496 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 23:26:56.477016 1369496 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 23:26:56.671259 1369496 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 23:26:57.762794 1369496 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 23:26:58.081211 1369496 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 23:26:58.805554 1369496 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 23:26:58.808233 1369496 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 23:26:58.825117 1369496 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 23:26:58.828793 1369496 out.go:252]   - Booting up control plane ...
	I1027 23:26:58.828915 1369496 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 23:26:58.840658 1369496 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 23:26:58.842136 1369496 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 23:26:58.864049 1369496 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 23:26:58.864187 1369496 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 23:26:58.873660 1369496 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 23:26:58.874262 1369496 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 23:26:58.874539 1369496 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 23:26:59.080521 1369496 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 23:26:59.080651 1369496 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 23:27:00.581426 1369496 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501339765s
	I1027 23:27:00.584884 1369496 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 23:27:00.584976 1369496 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1027 23:27:00.585295 1369496 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 23:27:00.585396 1369496 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 23:26:57.983904 1372118 out.go:252] * Restarting existing docker container for "embed-certs-790322" ...
	I1027 23:26:57.983987 1372118 cli_runner.go:164] Run: docker start embed-certs-790322
	I1027 23:26:58.327945 1372118 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:26:58.366280 1372118 kic.go:430] container "embed-certs-790322" state is running.
	I1027 23:26:58.367082 1372118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-790322
	I1027 23:26:58.400611 1372118 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/config.json ...
	I1027 23:26:58.400861 1372118 machine.go:94] provisionDockerMachine start ...
	I1027 23:26:58.400931 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:26:58.426994 1372118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:26:58.427322 1372118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34589 <nil> <nil>}
	I1027 23:26:58.427331 1372118 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:26:58.428275 1372118 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50640->127.0.0.1:34589: read: connection reset by peer
	I1027 23:27:01.622790 1372118 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-790322
	
	I1027 23:27:01.622827 1372118 ubuntu.go:182] provisioning hostname "embed-certs-790322"
	I1027 23:27:01.622918 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:01.668222 1372118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:27:01.668540 1372118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34589 <nil> <nil>}
	I1027 23:27:01.668557 1372118 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-790322 && echo "embed-certs-790322" | sudo tee /etc/hostname
	I1027 23:27:01.880089 1372118 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-790322
	
	I1027 23:27:01.880214 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:01.914678 1372118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:27:01.914993 1372118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34589 <nil> <nil>}
	I1027 23:27:01.915017 1372118 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-790322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-790322/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-790322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:27:02.100016 1372118 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 23:27:02.100086 1372118 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:27:02.100146 1372118 ubuntu.go:190] setting up certificates
	I1027 23:27:02.100174 1372118 provision.go:84] configureAuth start
	I1027 23:27:02.100252 1372118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-790322
	I1027 23:27:02.126984 1372118 provision.go:143] copyHostCerts
	I1027 23:27:02.127050 1372118 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:27:02.127065 1372118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:27:02.127143 1372118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:27:02.127251 1372118 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:27:02.127257 1372118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:27:02.127282 1372118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:27:02.127340 1372118 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:27:02.127344 1372118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:27:02.127366 1372118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:27:02.127412 1372118 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.embed-certs-790322 san=[127.0.0.1 192.168.85.2 embed-certs-790322 localhost minikube]
	I1027 23:27:03.574875 1369496 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.98960737s
	I1027 23:27:02.724924 1372118 provision.go:177] copyRemoteCerts
	I1027 23:27:02.725053 1372118 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:27:02.725125 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:02.742703 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:02.855688 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:27:02.901503 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 23:27:02.931477 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 23:27:02.967998 1372118 provision.go:87] duration metric: took 867.785329ms to configureAuth
	I1027 23:27:02.968070 1372118 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:27:02.968305 1372118 config.go:182] Loaded profile config "embed-certs-790322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:27:02.968463 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:02.996153 1372118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:27:02.996460 1372118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34589 <nil> <nil>}
	I1027 23:27:02.996478 1372118 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:27:03.467739 1372118 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:27:03.467809 1372118 machine.go:97] duration metric: took 5.066930053s to provisionDockerMachine
	I1027 23:27:03.467856 1372118 start.go:293] postStartSetup for "embed-certs-790322" (driver="docker")
	I1027 23:27:03.467893 1372118 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:27:03.467987 1372118 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:27:03.468071 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:03.493180 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:03.623500 1372118 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:27:03.627633 1372118 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:27:03.627671 1372118 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:27:03.627684 1372118 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:27:03.627749 1372118 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:27:03.627833 1372118 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:27:03.627947 1372118 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:27:03.644048 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:27:03.666091 1372118 start.go:296] duration metric: took 198.192776ms for postStartSetup
	I1027 23:27:03.666182 1372118 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:27:03.666245 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:03.682357 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:03.791652 1372118 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:27:03.798570 1372118 fix.go:57] duration metric: took 5.851708801s for fixHost
	I1027 23:27:03.798605 1372118 start.go:83] releasing machines lock for "embed-certs-790322", held for 5.851767157s
	I1027 23:27:03.798684 1372118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-790322
	I1027 23:27:03.828892 1372118 ssh_runner.go:195] Run: cat /version.json
	I1027 23:27:03.828957 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:03.829216 1372118 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:27:03.829280 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:03.879957 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:03.888974 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:04.102180 1372118 ssh_runner.go:195] Run: systemctl --version
	I1027 23:27:04.115296 1372118 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:27:04.181664 1372118 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:27:04.191270 1372118 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:27:04.191392 1372118 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:27:04.204722 1372118 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 23:27:04.204802 1372118 start.go:496] detecting cgroup driver to use...
	I1027 23:27:04.204849 1372118 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 23:27:04.204926 1372118 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:27:04.220880 1372118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:27:04.240791 1372118 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:27:04.240899 1372118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:27:04.258648 1372118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:27:04.286284 1372118 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:27:04.454855 1372118 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:27:04.644920 1372118 docker.go:234] disabling docker service ...
	I1027 23:27:04.645058 1372118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:27:04.660850 1372118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:27:04.675695 1372118 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:27:04.868099 1372118 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:27:05.063828 1372118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:27:05.082647 1372118 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:27:05.107749 1372118 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:27:05.107822 1372118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.121233 1372118 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:27:05.121307 1372118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.143748 1372118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.160586 1372118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.179086 1372118 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:27:05.191735 1372118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.207415 1372118 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.218949 1372118 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.235732 1372118 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:27:05.248461 1372118 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:27:05.264882 1372118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:27:05.462697 1372118 ssh_runner.go:195] Run: sudo systemctl restart crio
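Net effect of the sed edits above, reconstructed as a sketch of the drop-in they leave behind (the section headers are my assumption about where CRI-O keeps these keys; the real file may order things differently):

    # /etc/crio/crio.conf.d/02-crio.conf (reconstruction, not a dump of the actual file)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]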
	I1027 23:27:05.711167 1372118 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:27:05.711239 1372118 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:27:05.715341 1372118 start.go:564] Will wait 60s for crictl version
	I1027 23:27:05.715407 1372118 ssh_runner.go:195] Run: which crictl
	I1027 23:27:05.718946 1372118 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:27:05.766824 1372118 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 23:27:05.766910 1372118 ssh_runner.go:195] Run: crio --version
	I1027 23:27:05.820172 1372118 ssh_runner.go:195] Run: crio --version
	I1027 23:27:05.871373 1372118 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 23:27:05.874464 1372118 cli_runner.go:164] Run: docker network inspect embed-certs-790322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:27:05.904076 1372118 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 23:27:05.908444 1372118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
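The bash fragment above edits /etc/hosts safely: grep -v strips any stale host.minikube.internal mapping, the fresh line is appended, and the temp file is copied back into place under sudo. The entry it leaves behind is just:

    192.168.85.1	host.minikube.internal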
	I1027 23:27:05.923731 1372118 kubeadm.go:884] updating cluster {Name:embed-certs-790322 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:27:05.923843 1372118 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:27:05.923904 1372118 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:27:06.009813 1372118 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:27:06.009911 1372118 crio.go:433] Images already preloaded, skipping extraction
	I1027 23:27:06.010028 1372118 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:27:06.059961 1372118 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:27:06.059987 1372118 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:27:06.059996 1372118 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 23:27:06.060099 1372118 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-790322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
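The double ExecStart= in the unit text above is deliberate systemd drop-in syntax: the empty ExecStart= clears whatever the packaged kubelet.service defined before the minikube-specific command line is set. Once the drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below, the merged result can be inspected with plain systemctl (nothing minikube-specific):

    systemctl cat kubelet                          # base unit plus all drop-ins
    systemctl show kubelet --property=ExecStart    # the effective command line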
	I1027 23:27:06.060192 1372118 ssh_runner.go:195] Run: crio config
	I1027 23:27:06.181535 1372118 cni.go:84] Creating CNI manager for ""
	I1027 23:27:06.181558 1372118 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:27:06.181577 1372118 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 23:27:06.181600 1372118 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-790322 NodeName:embed-certs-790322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:27:06.181732 1372118 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-790322"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
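A manifest in this v1beta4 shape can be linted offline before kubeadm ever touches the node; recent kubeadm releases ship a validate subcommand for exactly this (an assumption to verify against your kubeadm version; the file path is the staging location the log scp's to just below):

    /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new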
	I1027 23:27:06.181812 1372118 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:27:06.192912 1372118 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:27:06.192995 1372118 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:27:06.203308 1372118 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1027 23:27:06.218584 1372118 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:27:06.232422 1372118 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1027 23:27:06.247296 1372118 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:27:06.251492 1372118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:27:06.261925 1372118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:27:06.457092 1372118 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:27:06.478856 1372118 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322 for IP: 192.168.85.2
	I1027 23:27:06.478875 1372118 certs.go:195] generating shared ca certs ...
	I1027 23:27:06.478891 1372118 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:27:06.479031 1372118 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:27:06.479080 1372118 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:27:06.479090 1372118 certs.go:257] generating profile certs ...
	I1027 23:27:06.479179 1372118 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/client.key
	I1027 23:27:06.479248 1372118 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/apiserver.key.f07237cc
	I1027 23:27:06.479292 1372118 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/proxy-client.key
	I1027 23:27:06.479402 1372118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:27:06.479436 1372118 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:27:06.479448 1372118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:27:06.479471 1372118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:27:06.479496 1372118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:27:06.479722 1372118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:27:06.479825 1372118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:27:06.480838 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:27:06.546023 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:27:06.590814 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:27:06.650028 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:27:06.677604 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1027 23:27:06.733526 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 23:27:06.770512 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:27:06.794546 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 23:27:06.817673 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:27:06.845792 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:27:06.874996 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:27:06.907763 1372118 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:27:06.939835 1372118 ssh_runner.go:195] Run: openssl version
	I1027 23:27:06.947898 1372118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:27:06.961316 1372118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:27:06.967846 1372118 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:27:06.967971 1372118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:27:07.018751 1372118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:27:07.027283 1372118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:27:07.035876 1372118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:27:07.040843 1372118 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:27:07.040991 1372118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:27:07.085555 1372118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 23:27:07.094489 1372118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:27:07.103537 1372118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:27:07.108009 1372118 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:27:07.108154 1372118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:27:07.150730 1372118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
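The three test-and-link commands above all follow OpenSSL's hashed-directory convention: each CA is linked into /etc/ssl/certs under <subject-hash>.0 so verifiers can locate it by hash instead of scanning every file. The same link can be built generically (cert path illustrative):

    cert=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$cert" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$cert").0"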
	I1027 23:27:07.160134 1372118 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:27:07.164988 1372118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 23:27:07.214638 1372118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 23:27:07.268298 1372118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 23:27:07.344572 1372118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 23:27:07.414155 1372118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 23:27:07.508607 1372118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
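Each of the six openssl calls above uses -checkend 86400: exit status 0 means the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit flags a cert expiring inside that window, which is what would push minikube into regenerating it. Standalone form:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h (or already expired)"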
	I1027 23:27:07.566964 1372118 kubeadm.go:401] StartCluster: {Name:embed-certs-790322 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:27:07.567056 1372118 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:27:07.567131 1372118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:27:07.721596 1372118 cri.go:89] found id: "2dd33085839f4b3ec48e1cee1be0d27c1b29b3ebaf8e0437c48d7c3fc9c0602c"
	I1027 23:27:07.721621 1372118 cri.go:89] found id: "04d779de2ba59c56b41e444a5f41bcb57f87bfbcebe9ef9955704cdc0d568248"
	I1027 23:27:07.721626 1372118 cri.go:89] found id: "4cca3101ea45339f788b56e37456e84838b100b57b1522533eaa76028f279109"
	I1027 23:27:07.721636 1372118 cri.go:89] found id: ""
	I1027 23:27:07.721689 1372118 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 23:27:07.809334 1372118 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:27:07Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:27:07.809421 1372118 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:27:07.830014 1372118 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 23:27:07.830034 1372118 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 23:27:07.830105 1372118 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 23:27:07.845122 1372118 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 23:27:07.845557 1372118 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-790322" does not appear in /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:27:07.845661 1372118 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-1132878/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-790322" cluster setting kubeconfig missing "embed-certs-790322" context setting]
	I1027 23:27:07.845942 1372118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:27:07.847319 1372118 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 23:27:07.868638 1372118 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 23:27:07.868673 1372118 kubeadm.go:602] duration metric: took 38.632535ms to restartPrimaryControlPlane
	I1027 23:27:07.868682 1372118 kubeadm.go:403] duration metric: took 301.730067ms to StartCluster
	I1027 23:27:07.868697 1372118 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:27:07.868756 1372118 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:27:07.869767 1372118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:27:07.869989 1372118 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:27:07.870257 1372118 config.go:182] Loaded profile config "embed-certs-790322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:27:07.870306 1372118 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:27:07.870424 1372118 addons.go:69] Setting dashboard=true in profile "embed-certs-790322"
	I1027 23:27:07.870374 1372118 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-790322"
	I1027 23:27:07.870449 1372118 addons.go:238] Setting addon dashboard=true in "embed-certs-790322"
	I1027 23:27:07.870456 1372118 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-790322"
	W1027 23:27:07.870457 1372118 addons.go:247] addon dashboard should already be in state true
	W1027 23:27:07.870462 1372118 addons.go:247] addon storage-provisioner should already be in state true
	I1027 23:27:07.870482 1372118 host.go:66] Checking if "embed-certs-790322" exists ...
	I1027 23:27:07.870485 1372118 host.go:66] Checking if "embed-certs-790322" exists ...
	I1027 23:27:07.870932 1372118 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:27:07.870947 1372118 addons.go:69] Setting default-storageclass=true in profile "embed-certs-790322"
	I1027 23:27:07.870960 1372118 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-790322"
	I1027 23:27:07.871199 1372118 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:27:07.870934 1372118 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:27:07.874327 1372118 out.go:179] * Verifying Kubernetes components...
	I1027 23:27:07.877483 1372118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:27:07.921642 1372118 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:27:07.923871 1372118 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:27:07.923902 1372118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:27:07.923973 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:07.926608 1372118 addons.go:238] Setting addon default-storageclass=true in "embed-certs-790322"
	W1027 23:27:07.926636 1372118 addons.go:247] addon default-storageclass should already be in state true
	I1027 23:27:07.926662 1372118 host.go:66] Checking if "embed-certs-790322" exists ...
	I1027 23:27:07.927094 1372118 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:27:07.930680 1372118 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 23:27:07.934972 1372118 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 23:27:07.589168 1369496 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.003295676s
	I1027 23:27:08.586654 1369496 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.00161344s
	I1027 23:27:08.617820 1369496 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 23:27:08.651361 1369496 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 23:27:08.672815 1369496 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 23:27:08.673024 1369496 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-336451 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 23:27:08.695558 1369496 kubeadm.go:319] [bootstrap-token] Using token: j9lm8r.7dur7mpnl819twae
	I1027 23:27:08.698544 1369496 out.go:252]   - Configuring RBAC rules ...
	I1027 23:27:08.698661 1369496 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 23:27:08.705744 1369496 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 23:27:08.723693 1369496 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 23:27:08.731147 1369496 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 23:27:08.736342 1369496 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 23:27:08.745908 1369496 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 23:27:09.017778 1369496 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 23:27:09.574635 1369496 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 23:27:09.998756 1369496 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 23:27:10.000172 1369496 kubeadm.go:319] 
	I1027 23:27:10.000265 1369496 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 23:27:10.000277 1369496 kubeadm.go:319] 
	I1027 23:27:10.000361 1369496 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 23:27:10.000371 1369496 kubeadm.go:319] 
	I1027 23:27:10.000398 1369496 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 23:27:10.000892 1369496 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 23:27:10.000961 1369496 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 23:27:10.000971 1369496 kubeadm.go:319] 
	I1027 23:27:10.001030 1369496 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 23:27:10.001039 1369496 kubeadm.go:319] 
	I1027 23:27:10.001091 1369496 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 23:27:10.001099 1369496 kubeadm.go:319] 
	I1027 23:27:10.001163 1369496 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 23:27:10.001249 1369496 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 23:27:10.001327 1369496 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 23:27:10.001335 1369496 kubeadm.go:319] 
	I1027 23:27:10.001629 1369496 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 23:27:10.001721 1369496 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 23:27:10.001731 1369496 kubeadm.go:319] 
	I1027 23:27:10.002145 1369496 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token j9lm8r.7dur7mpnl819twae \
	I1027 23:27:10.002273 1369496 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 \
	I1027 23:27:10.002492 1369496 kubeadm.go:319] 	--control-plane 
	I1027 23:27:10.002509 1369496 kubeadm.go:319] 
	I1027 23:27:10.002795 1369496 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 23:27:10.002815 1369496 kubeadm.go:319] 
	I1027 23:27:10.003080 1369496 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token j9lm8r.7dur7mpnl819twae \
	I1027 23:27:10.003401 1369496 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 
	I1027 23:27:10.009000 1369496 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 23:27:10.009283 1369496 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 23:27:10.009410 1369496 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 23:27:10.009468 1369496 cni.go:84] Creating CNI manager for ""
	I1027 23:27:10.009482 1369496 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:27:10.013092 1369496 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 23:27:10.016073 1369496 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 23:27:10.032899 1369496 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 23:27:10.032926 1369496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 23:27:10.084560 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 23:27:10.555414 1369496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 23:27:10.555538 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:10.555613 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-336451 minikube.k8s.io/updated_at=2025_10_27T23_27_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=default-k8s-diff-port-336451 minikube.k8s.io/primary=true
	I1027 23:27:07.942570 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 23:27:07.942597 1372118 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 23:27:07.942676 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:07.970507 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:07.977164 1372118 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:27:07.977185 1372118 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:27:07.977247 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:08.010762 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:08.030543 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:08.342954 1372118 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:27:08.363752 1372118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:27:08.405304 1372118 node_ready.go:35] waiting up to 6m0s for node "embed-certs-790322" to be "Ready" ...
	I1027 23:27:08.479620 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 23:27:08.479646 1372118 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 23:27:08.508486 1372118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:27:08.515674 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 23:27:08.515702 1372118 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 23:27:08.610848 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 23:27:08.610914 1372118 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 23:27:08.743517 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 23:27:08.743586 1372118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 23:27:08.814050 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 23:27:08.814117 1372118 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 23:27:08.837148 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 23:27:08.837221 1372118 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 23:27:08.859763 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 23:27:08.859839 1372118 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 23:27:08.880028 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 23:27:08.880102 1372118 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 23:27:08.907564 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 23:27:08.907638 1372118 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 23:27:08.935516 1372118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
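Chaining ten -f flags keeps the whole dashboard install to a single kubectl apply invocation. Since apply is idempotent, an equivalent shorthand would be to point -f at the addons directory and let kubectl pick up every manifest in it (illustrative; it would also re-apply the storage manifests installed above, which is harmless):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/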
	I1027 23:27:10.876897 1369496 ops.go:34] apiserver oom_adj: -16
	I1027 23:27:10.876997 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:11.377135 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:11.877315 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:12.377098 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:12.877634 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:13.377806 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:13.877368 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:14.378067 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:14.628184 1369496 kubeadm.go:1114] duration metric: took 4.072679138s to wait for elevateKubeSystemPrivileges
	I1027 23:27:14.628211 1369496 kubeadm.go:403] duration metric: took 22.864632047s to StartCluster
	I1027 23:27:14.628228 1369496 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:27:14.628287 1369496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:27:14.629803 1369496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:27:14.630050 1369496 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:27:14.630138 1369496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 23:27:14.630441 1369496 config.go:182] Loaded profile config "default-k8s-diff-port-336451": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:27:14.630483 1369496 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:27:14.630541 1369496 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-336451"
	I1027 23:27:14.630555 1369496 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-336451"
	I1027 23:27:14.630575 1369496 host.go:66] Checking if "default-k8s-diff-port-336451" exists ...
	I1027 23:27:14.631062 1369496 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-336451"
	I1027 23:27:14.631080 1369496 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-336451"
	I1027 23:27:14.631353 1369496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-336451 --format={{.State.Status}}
	I1027 23:27:14.631693 1369496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-336451 --format={{.State.Status}}
	I1027 23:27:14.635148 1369496 out.go:179] * Verifying Kubernetes components...
	I1027 23:27:14.638515 1369496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:27:14.668067 1369496 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-336451"
	I1027 23:27:14.668115 1369496 host.go:66] Checking if "default-k8s-diff-port-336451" exists ...
	I1027 23:27:14.668539 1369496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-336451 --format={{.State.Status}}
	I1027 23:27:14.675228 1369496 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:27:14.680124 1369496 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:27:14.680150 1369496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:27:14.680213 1369496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:27:14.704695 1369496 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:27:14.704721 1369496 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:27:14.704784 1369496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:27:14.731557 1369496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34584 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/default-k8s-diff-port-336451/id_rsa Username:docker}
	I1027 23:27:14.742439 1369496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34584 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/default-k8s-diff-port-336451/id_rsa Username:docker}
	I1027 23:27:15.224704 1369496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:27:15.318545 1369496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:27:15.390982 1369496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 23:27:15.391153 1369496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:27:16.939430 1369496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.714694755s)
	I1027 23:27:16.939476 1369496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.620913736s)
	I1027 23:27:16.939769 1369496 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.548578836s)
	I1027 23:27:16.940917 1369496 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-336451" to be "Ready" ...
	I1027 23:27:16.941165 1369496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.550112241s)
	I1027 23:27:16.941180 1369496 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
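The sed pipeline that just completed rewrote the CoreDNS Corefile in flight: it splices a hosts plugin block in front of the forward directive and a log directive in front of errors, then feeds the result to kubectl replace. Per the sed expressions, the injected block reads:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }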
	I1027 23:27:17.067100 1369496 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 23:27:13.874223 1372118 node_ready.go:49] node "embed-certs-790322" is "Ready"
	I1027 23:27:13.874298 1372118 node_ready.go:38] duration metric: took 5.468960816s for node "embed-certs-790322" to be "Ready" ...
	I1027 23:27:13.874327 1372118 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:27:13.874432 1372118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:27:17.240012 1372118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.876173866s)
	I1027 23:27:17.240079 1372118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.731569168s)
	I1027 23:27:17.240439 1372118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.304837211s)
	I1027 23:27:17.241092 1372118 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.366626363s)
	I1027 23:27:17.241118 1372118 api_server.go:72] duration metric: took 9.371098403s to wait for apiserver process to appear ...
	I1027 23:27:17.241124 1372118 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:27:17.241138 1372118 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 23:27:17.243741 1372118 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-790322 addons enable metrics-server
	
	I1027 23:27:17.256320 1372118 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
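That 200/ok can be reproduced by hand: kube-apiserver serves /healthz (and the newer /livez and /readyz) to unauthenticated clients by default, so a bare curl works as long as TLS verification is skipped against the self-signed cert:

    curl -k https://192.168.85.2:8443/healthz
    # ok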
	I1027 23:27:17.257988 1372118 api_server.go:141] control plane version: v1.34.1
	I1027 23:27:17.258012 1372118 api_server.go:131] duration metric: took 16.88182ms to wait for apiserver health ...
	I1027 23:27:17.258022 1372118 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:27:17.262230 1372118 system_pods.go:59] 8 kube-system pods found
	I1027 23:27:17.262268 1372118 system_pods.go:61] "coredns-66bc5c9577-7czsv" [2949488f-bf74-4218-b480-955908b58ac0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:27:17.262278 1372118 system_pods.go:61] "etcd-embed-certs-790322" [592926b2-df2b-407d-8c86-931a4162bdd6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:27:17.262284 1372118 system_pods.go:61] "kindnet-l2rcj" [c50bbe3e-12b4-4007-aa20-dfd1b04d38aa] Running
	I1027 23:27:17.262291 1372118 system_pods.go:61] "kube-apiserver-embed-certs-790322" [3839b875-fa30-4534-b042-37b5493241ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:27:17.262299 1372118 system_pods.go:61] "kube-controller-manager-embed-certs-790322" [ebf1417a-4c48-4950-9e6b-85d4856dc0c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:27:17.262304 1372118 system_pods.go:61] "kube-proxy-7lwt5" [5d8f2c0d-30b5-487c-9d9e-e7be86b3be39] Running
	I1027 23:27:17.262312 1372118 system_pods.go:61] "kube-scheduler-embed-certs-790322" [cd6b90e4-d691-4163-815e-56ff72e4ba2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:27:17.262325 1372118 system_pods.go:61] "storage-provisioner" [2d42c557-cbb9-445c-8bd8-7b481a959c11] Running
	I1027 23:27:17.262331 1372118 system_pods.go:74] duration metric: took 4.302994ms to wait for pod list to return data ...
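The "Running / Ready:ContainersNotReady" pairs above are the normal transient right after a control-plane restart: the pods exist and their containers are starting, but readiness probes have not yet passed. The equivalent manual view with plain kubectl against the same cluster:

    kubectl get pods -n kube-system
    # READY 0/1 rows correspond to the ContainersNotReady states logged above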
	I1027 23:27:17.262339 1372118 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:27:17.264424 1372118 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 23:27:17.265670 1372118 default_sa.go:45] found service account: "default"
	I1027 23:27:17.265691 1372118 default_sa.go:55] duration metric: took 3.341528ms for default service account to be created ...
	I1027 23:27:17.265700 1372118 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:27:17.267823 1372118 addons.go:514] duration metric: took 9.397513282s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 23:27:17.269731 1372118 system_pods.go:86] 8 kube-system pods found
	I1027 23:27:17.269763 1372118 system_pods.go:89] "coredns-66bc5c9577-7czsv" [2949488f-bf74-4218-b480-955908b58ac0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:27:17.269773 1372118 system_pods.go:89] "etcd-embed-certs-790322" [592926b2-df2b-407d-8c86-931a4162bdd6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:27:17.269807 1372118 system_pods.go:89] "kindnet-l2rcj" [c50bbe3e-12b4-4007-aa20-dfd1b04d38aa] Running
	I1027 23:27:17.269816 1372118 system_pods.go:89] "kube-apiserver-embed-certs-790322" [3839b875-fa30-4534-b042-37b5493241ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:27:17.269827 1372118 system_pods.go:89] "kube-controller-manager-embed-certs-790322" [ebf1417a-4c48-4950-9e6b-85d4856dc0c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:27:17.269833 1372118 system_pods.go:89] "kube-proxy-7lwt5" [5d8f2c0d-30b5-487c-9d9e-e7be86b3be39] Running
	I1027 23:27:17.269839 1372118 system_pods.go:89] "kube-scheduler-embed-certs-790322" [cd6b90e4-d691-4163-815e-56ff72e4ba2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:27:17.269844 1372118 system_pods.go:89] "storage-provisioner" [2d42c557-cbb9-445c-8bd8-7b481a959c11] Running
	I1027 23:27:17.269854 1372118 system_pods.go:126] duration metric: took 4.147832ms to wait for k8s-apps to be running ...
	I1027 23:27:17.269890 1372118 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:27:17.269953 1372118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:27:17.285105 1372118 system_svc.go:56] duration metric: took 15.215681ms WaitForService to wait for kubelet
	I1027 23:27:17.285132 1372118 kubeadm.go:587] duration metric: took 9.415111469s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:27:17.285152 1372118 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:27:17.288591 1372118 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:27:17.288620 1372118 node_conditions.go:123] node cpu capacity is 2
	I1027 23:27:17.288631 1372118 node_conditions.go:105] duration metric: took 3.474913ms to run NodePressure ...
	I1027 23:27:17.288644 1372118 start.go:242] waiting for startup goroutines ...
	I1027 23:27:17.288651 1372118 start.go:247] waiting for cluster config update ...
	I1027 23:27:17.288662 1372118 start.go:256] writing updated cluster config ...
	I1027 23:27:17.288954 1372118 ssh_runner.go:195] Run: rm -f paused
	I1027 23:27:17.293358 1372118 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:27:17.297645 1372118 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7czsv" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:17.069995 1369496 addons.go:514] duration metric: took 2.43947725s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 23:27:17.445817 1369496 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-336451" context rescaled to 1 replicas
	W1027 23:27:18.944917 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:19.303525 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:21.303757 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:20.944970 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:23.444340 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:25.444545 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:23.303865 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:25.305363 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:27.944636 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:29.945351 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:27.802993 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:29.805442 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:32.303094 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:31.945833 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:34.443546 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:34.303156 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:36.303987 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:36.444401 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:38.945276 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:38.803141 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:40.807249 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:40.946308 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:43.443932 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:45.444057 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:43.304281 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:45.315142 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:47.444601 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:49.944862 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:47.803124 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:49.803899 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:52.302643 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:51.951303 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:54.444066 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:54.303440 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:56.804763 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	I1027 23:27:57.303397 1372118 pod_ready.go:94] pod "coredns-66bc5c9577-7czsv" is "Ready"
	I1027 23:27:57.303428 1372118 pod_ready.go:86] duration metric: took 40.005747477s for pod "coredns-66bc5c9577-7czsv" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.306074 1372118 pod_ready.go:83] waiting for pod "etcd-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.310979 1372118 pod_ready.go:94] pod "etcd-embed-certs-790322" is "Ready"
	I1027 23:27:57.311008 1372118 pod_ready.go:86] duration metric: took 4.906875ms for pod "etcd-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.313335 1372118 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.317784 1372118 pod_ready.go:94] pod "kube-apiserver-embed-certs-790322" is "Ready"
	I1027 23:27:57.317811 1372118 pod_ready.go:86] duration metric: took 4.447226ms for pod "kube-apiserver-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.320275 1372118 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.501919 1372118 pod_ready.go:94] pod "kube-controller-manager-embed-certs-790322" is "Ready"
	I1027 23:27:57.501951 1372118 pod_ready.go:86] duration metric: took 181.642312ms for pod "kube-controller-manager-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.702272 1372118 pod_ready.go:83] waiting for pod "kube-proxy-7lwt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.101593 1372118 pod_ready.go:94] pod "kube-proxy-7lwt5" is "Ready"
	I1027 23:27:58.101632 1372118 pod_ready.go:86] duration metric: took 399.333918ms for pod "kube-proxy-7lwt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.302030 1372118 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.702130 1372118 pod_ready.go:94] pod "kube-scheduler-embed-certs-790322" is "Ready"
	I1027 23:27:58.702156 1372118 pod_ready.go:86] duration metric: took 400.098647ms for pod "kube-scheduler-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.702169 1372118 pod_ready.go:40] duration metric: took 41.408773009s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:27:58.771969 1372118 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:27:58.775340 1372118 out.go:179] * Done! kubectl is now configured to use "embed-certs-790322" cluster and "default" namespace by default
	W1027 23:27:56.944057 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	I1027 23:27:57.453799 1369496 node_ready.go:49] node "default-k8s-diff-port-336451" is "Ready"
	I1027 23:27:57.453832 1369496 node_ready.go:38] duration metric: took 40.512898119s for node "default-k8s-diff-port-336451" to be "Ready" ...
	I1027 23:27:57.453846 1369496 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:27:57.453908 1369496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:27:57.472544 1369496 api_server.go:72] duration metric: took 42.842462718s to wait for apiserver process to appear ...
	I1027 23:27:57.472572 1369496 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:27:57.472601 1369496 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1027 23:27:57.481723 1369496 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1027 23:27:57.482839 1369496 api_server.go:141] control plane version: v1.34.1
	I1027 23:27:57.482868 1369496 api_server.go:131] duration metric: took 10.289376ms to wait for apiserver health ...
	I1027 23:27:57.482876 1369496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:27:57.485982 1369496 system_pods.go:59] 8 kube-system pods found
	I1027 23:27:57.486032 1369496 system_pods.go:61] "coredns-66bc5c9577-lzssb" [cb585899-022a-4a05-b73d-ab4ef8e7119a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:27:57.486041 1369496 system_pods.go:61] "etcd-default-k8s-diff-port-336451" [d2052799-8302-43e4-b2de-1ae7ecc5d073] Running
	I1027 23:27:57.486050 1369496 system_pods.go:61] "kindnet-ht7mm" [972ca641-7980-4167-9478-45795128282d] Running
	I1027 23:27:57.486055 1369496 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-336451" [6c97a839-7855-4ce4-a15e-765781f00b89] Running
	I1027 23:27:57.486060 1369496 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-336451" [45c8bd93-e3d8-416f-9550-55eb28cef602] Running
	I1027 23:27:57.486065 1369496 system_pods.go:61] "kube-proxy-n4vzn" [883449ce-dcf8-47d7-8f93-9fc7612cf7a1] Running
	I1027 23:27:57.486070 1369496 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-336451" [fd388522-944b-4447-a8db-8bfa05f722ea] Running
	I1027 23:27:57.486077 1369496 system_pods.go:61] "storage-provisioner" [376c0c54-0b9b-47ed-a3c0-d74fcdf0c102] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:27:57.486088 1369496 system_pods.go:74] duration metric: took 3.206486ms to wait for pod list to return data ...
	I1027 23:27:57.486097 1369496 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:27:57.488683 1369496 default_sa.go:45] found service account: "default"
	I1027 23:27:57.488755 1369496 default_sa.go:55] duration metric: took 2.651861ms for default service account to be created ...
	I1027 23:27:57.488771 1369496 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:27:57.491648 1369496 system_pods.go:86] 8 kube-system pods found
	I1027 23:27:57.491685 1369496 system_pods.go:89] "coredns-66bc5c9577-lzssb" [cb585899-022a-4a05-b73d-ab4ef8e7119a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:27:57.491692 1369496 system_pods.go:89] "etcd-default-k8s-diff-port-336451" [d2052799-8302-43e4-b2de-1ae7ecc5d073] Running
	I1027 23:27:57.491698 1369496 system_pods.go:89] "kindnet-ht7mm" [972ca641-7980-4167-9478-45795128282d] Running
	I1027 23:27:57.491705 1369496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-336451" [6c97a839-7855-4ce4-a15e-765781f00b89] Running
	I1027 23:27:57.491709 1369496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-336451" [45c8bd93-e3d8-416f-9550-55eb28cef602] Running
	I1027 23:27:57.491714 1369496 system_pods.go:89] "kube-proxy-n4vzn" [883449ce-dcf8-47d7-8f93-9fc7612cf7a1] Running
	I1027 23:27:57.491718 1369496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-336451" [fd388522-944b-4447-a8db-8bfa05f722ea] Running
	I1027 23:27:57.491724 1369496 system_pods.go:89] "storage-provisioner" [376c0c54-0b9b-47ed-a3c0-d74fcdf0c102] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:27:57.491744 1369496 retry.go:31] will retry after 216.8039ms: missing components: kube-dns
	I1027 23:27:57.712499 1369496 system_pods.go:86] 8 kube-system pods found
	I1027 23:27:57.712534 1369496 system_pods.go:89] "coredns-66bc5c9577-lzssb" [cb585899-022a-4a05-b73d-ab4ef8e7119a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:27:57.712541 1369496 system_pods.go:89] "etcd-default-k8s-diff-port-336451" [d2052799-8302-43e4-b2de-1ae7ecc5d073] Running
	I1027 23:27:57.712547 1369496 system_pods.go:89] "kindnet-ht7mm" [972ca641-7980-4167-9478-45795128282d] Running
	I1027 23:27:57.712552 1369496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-336451" [6c97a839-7855-4ce4-a15e-765781f00b89] Running
	I1027 23:27:57.712556 1369496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-336451" [45c8bd93-e3d8-416f-9550-55eb28cef602] Running
	I1027 23:27:57.712569 1369496 system_pods.go:89] "kube-proxy-n4vzn" [883449ce-dcf8-47d7-8f93-9fc7612cf7a1] Running
	I1027 23:27:57.712581 1369496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-336451" [fd388522-944b-4447-a8db-8bfa05f722ea] Running
	I1027 23:27:57.712591 1369496 system_pods.go:89] "storage-provisioner" [376c0c54-0b9b-47ed-a3c0-d74fcdf0c102] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:27:57.712606 1369496 retry.go:31] will retry after 332.328897ms: missing components: kube-dns
	I1027 23:27:58.048510 1369496 system_pods.go:86] 8 kube-system pods found
	I1027 23:27:58.048549 1369496 system_pods.go:89] "coredns-66bc5c9577-lzssb" [cb585899-022a-4a05-b73d-ab4ef8e7119a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:27:58.048555 1369496 system_pods.go:89] "etcd-default-k8s-diff-port-336451" [d2052799-8302-43e4-b2de-1ae7ecc5d073] Running
	I1027 23:27:58.048583 1369496 system_pods.go:89] "kindnet-ht7mm" [972ca641-7980-4167-9478-45795128282d] Running
	I1027 23:27:58.048595 1369496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-336451" [6c97a839-7855-4ce4-a15e-765781f00b89] Running
	I1027 23:27:58.048600 1369496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-336451" [45c8bd93-e3d8-416f-9550-55eb28cef602] Running
	I1027 23:27:58.048605 1369496 system_pods.go:89] "kube-proxy-n4vzn" [883449ce-dcf8-47d7-8f93-9fc7612cf7a1] Running
	I1027 23:27:58.048609 1369496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-336451" [fd388522-944b-4447-a8db-8bfa05f722ea] Running
	I1027 23:27:58.048621 1369496 system_pods.go:89] "storage-provisioner" [376c0c54-0b9b-47ed-a3c0-d74fcdf0c102] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:27:58.048638 1369496 retry.go:31] will retry after 460.922768ms: missing components: kube-dns
	I1027 23:27:58.514497 1369496 system_pods.go:86] 8 kube-system pods found
	I1027 23:27:58.514528 1369496 system_pods.go:89] "coredns-66bc5c9577-lzssb" [cb585899-022a-4a05-b73d-ab4ef8e7119a] Running
	I1027 23:27:58.514536 1369496 system_pods.go:89] "etcd-default-k8s-diff-port-336451" [d2052799-8302-43e4-b2de-1ae7ecc5d073] Running
	I1027 23:27:58.514541 1369496 system_pods.go:89] "kindnet-ht7mm" [972ca641-7980-4167-9478-45795128282d] Running
	I1027 23:27:58.514568 1369496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-336451" [6c97a839-7855-4ce4-a15e-765781f00b89] Running
	I1027 23:27:58.514583 1369496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-336451" [45c8bd93-e3d8-416f-9550-55eb28cef602] Running
	I1027 23:27:58.514587 1369496 system_pods.go:89] "kube-proxy-n4vzn" [883449ce-dcf8-47d7-8f93-9fc7612cf7a1] Running
	I1027 23:27:58.514591 1369496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-336451" [fd388522-944b-4447-a8db-8bfa05f722ea] Running
	I1027 23:27:58.514596 1369496 system_pods.go:89] "storage-provisioner" [376c0c54-0b9b-47ed-a3c0-d74fcdf0c102] Running
	I1027 23:27:58.514604 1369496 system_pods.go:126] duration metric: took 1.025828047s to wait for k8s-apps to be running ...
	I1027 23:27:58.514615 1369496 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:27:58.514685 1369496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:27:58.527910 1369496 system_svc.go:56] duration metric: took 13.284355ms WaitForService to wait for kubelet
	I1027 23:27:58.527991 1369496 kubeadm.go:587] duration metric: took 43.897912924s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:27:58.528022 1369496 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:27:58.530975 1369496 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:27:58.531012 1369496 node_conditions.go:123] node cpu capacity is 2
	I1027 23:27:58.531026 1369496 node_conditions.go:105] duration metric: took 2.998065ms to run NodePressure ...
	I1027 23:27:58.531040 1369496 start.go:242] waiting for startup goroutines ...
	I1027 23:27:58.531047 1369496 start.go:247] waiting for cluster config update ...
	I1027 23:27:58.531058 1369496 start.go:256] writing updated cluster config ...
	I1027 23:27:58.531349 1369496 ssh_runner.go:195] Run: rm -f paused
	I1027 23:27:58.535071 1369496 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:27:58.540137 1369496 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lzssb" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.544988 1369496 pod_ready.go:94] pod "coredns-66bc5c9577-lzssb" is "Ready"
	I1027 23:27:58.545018 1369496 pod_ready.go:86] duration metric: took 4.849939ms for pod "coredns-66bc5c9577-lzssb" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.547774 1369496 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.560603 1369496 pod_ready.go:94] pod "etcd-default-k8s-diff-port-336451" is "Ready"
	I1027 23:27:58.560631 1369496 pod_ready.go:86] duration metric: took 12.829505ms for pod "etcd-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.563118 1369496 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.567963 1369496 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-336451" is "Ready"
	I1027 23:27:58.567990 1369496 pod_ready.go:86] duration metric: took 4.84856ms for pod "kube-apiserver-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.570520 1369496 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.942942 1369496 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-336451" is "Ready"
	I1027 23:27:58.942969 1369496 pod_ready.go:86] duration metric: took 372.417831ms for pod "kube-controller-manager-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:59.142563 1369496 pod_ready.go:83] waiting for pod "kube-proxy-n4vzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:59.540641 1369496 pod_ready.go:94] pod "kube-proxy-n4vzn" is "Ready"
	I1027 23:27:59.540665 1369496 pod_ready.go:86] duration metric: took 398.079189ms for pod "kube-proxy-n4vzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:59.741260 1369496 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:28:00.173655 1369496 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-336451" is "Ready"
	I1027 23:28:00.173689 1369496 pod_ready.go:86] duration metric: took 432.399523ms for pod "kube-scheduler-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:28:00.173703 1369496 pod_ready.go:40] duration metric: took 1.638599587s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:28:00.365146 1369496 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:28:00.384228 1369496 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-336451" cluster and "default" namespace by default
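
Note on the wait loops in the two transcripts above: both runs poll the apiserver's /healthz endpoint until it returns 200 before checking pods (api_server.go:253/279). A minimal Go sketch of that poll-until-ok loop, assuming a self-signed apiserver certificate so TLS verification is skipped; this is illustrative, not minikube's actual code:

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses,
// mirroring the "waiting for apiserver healthz status" loop in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption: the apiserver cert is self-signed, so skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("healthz did not return 200 before timeout")
}

func main() {
	// Endpoint taken from the default-k8s-diff-port run above.
	if err := waitForHealthz("https://192.168.76.2:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}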
	
	
	==> CRI-O <==
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.171484375Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=49cf5575-b3a4-40bf-b4ec-133995f8b132 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.172591418Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=854eafd1-47e5-4ea1-bd7b-f5b53d1d0538 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.172825145Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.182788715Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.183002313Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/96c53e68e39339af13f58632ff1639a1ff1909423528c4a4435b3b9d12dfd59c/merged/etc/passwd: no such file or directory"
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.183026708Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/96c53e68e39339af13f58632ff1639a1ff1909423528c4a4435b3b9d12dfd59c/merged/etc/group: no such file or directory"
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.183292492Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.201008343Z" level=info msg="Created container 685f12b4b12a0f9d4b7e38925a0ba384cfd8201d295e923f85d5c37491f0f479: kube-system/storage-provisioner/storage-provisioner" id=854eafd1-47e5-4ea1-bd7b-f5b53d1d0538 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.201906743Z" level=info msg="Starting container: 685f12b4b12a0f9d4b7e38925a0ba384cfd8201d295e923f85d5c37491f0f479" id=f4e1fe5a-ac75-40fe-a18c-ed73938b2b06 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.206003974Z" level=info msg="Started container" PID=1648 containerID=685f12b4b12a0f9d4b7e38925a0ba384cfd8201d295e923f85d5c37491f0f479 description=kube-system/storage-provisioner/storage-provisioner id=f4e1fe5a-ac75-40fe-a18c-ed73938b2b06 name=/runtime.v1.RuntimeService/StartContainer sandboxID=af0e082e7fe94b2dc2398c07663ed9cefad54bc74363d57c46545dfecb63d66b
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.745763341Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.74973129Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.749766457Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.7497952Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.753472823Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.753509591Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.753529644Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.756955793Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.756989779Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.757013049Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.760380358Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.760415838Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.760443793Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.763889084Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.763924826Z" level=info msg="Updated default CNI network name to kindnet"
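
The CNI monitoring events above (CREATE/WRITE/RENAME on /etc/cni/net.d/10-kindnet.conflist.temp) are CRI-O reacting to kindnet rewriting its config file. A rough sketch of the same watch-and-reload idea using fsnotify; an illustration of the mechanism under that assumption, not CRI-O's implementation:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Watch the CNI config directory named in the log above.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for ev := range w.Events {
		if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
			// On each such event CRI-O re-parses the *.conflist files and
			// updates its default CNI network, hence the repeated
			// "Updated default CNI network name to kindnet" lines.
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
		}
	}
}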
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	685f12b4b12a0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   af0e082e7fe94       storage-provisioner                          kube-system
	54aca756edf6b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago       Exited              dashboard-metrics-scraper   2                   d4a8b3957a9dd       dashboard-metrics-scraper-6ffb444bf9-57wqx   kubernetes-dashboard
	b97f21439a7b9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago       Running             kubernetes-dashboard        0                   2bad7d37d6aac       kubernetes-dashboard-855c9754f9-m4ssq        kubernetes-dashboard
	e95ec2573027c       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   b8c75b476bbbd       busybox                                      default
	7cb3f092409e6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   ad8e624e74350       kube-proxy-7lwt5                             kube-system
	81dc02aac9076       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   af0e082e7fe94       storage-provisioner                          kube-system
	dd862bc0975c4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   486302e90a231       coredns-66bc5c9577-7czsv                     kube-system
	a25501fea7b4d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   9f84a593e81d2       kindnet-l2rcj                                kube-system
	99cfb8a94d79f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   915d34822b240       kube-apiserver-embed-certs-790322            kube-system
	2dd33085839f4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   c455a9e029f55       kube-scheduler-embed-certs-790322            kube-system
	04d779de2ba59       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   6d3d8c2179fdd       etcd-embed-certs-790322                      kube-system
	4cca3101ea453       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   d7a169a88a7e9       kube-controller-manager-embed-certs-790322   kube-system
	
	
	==> coredns [dd862bc0975c47b020906fd67965252737767357cd14270fa3ebcf0e580227ec] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45836 - 52467 "HINFO IN 7358259606901163704.6601914373597417710. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025491328s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
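
The "dial tcp 10.96.0.1:443: i/o timeout" errors above mean CoreDNS could not yet reach the kubernetes Service ClusterIP; they stop once kube-proxy and kindnet finish reprogramming the dataplane after the restart. A trivial in-cluster connectivity probe for that ClusterIP (a diagnostic sketch only, not part of CoreDNS):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the ClusterIP of the default "kubernetes" Service,
	// the address CoreDNS was timing out against above.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		fmt.Println("apiserver ClusterIP unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver ClusterIP reachable")
}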
	
	
	==> describe nodes <==
	Name:               embed-certs-790322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-790322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=embed-certs-790322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T23_25_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 23:25:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-790322
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 23:28:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 23:27:45 +0000   Mon, 27 Oct 2025 23:25:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 23:27:45 +0000   Mon, 27 Oct 2025 23:25:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 23:27:45 +0000   Mon, 27 Oct 2025 23:25:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 23:27:45 +0000   Mon, 27 Oct 2025 23:26:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-790322
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                303b75c8-bfe7-43fd-a2ff-1f7c0bfb24ff
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-66bc5c9577-7czsv                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m26s
	  kube-system                 etcd-embed-certs-790322                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m33s
	  kube-system                 kindnet-l2rcj                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m27s
	  kube-system                 kube-apiserver-embed-certs-790322             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-controller-manager-embed-certs-790322    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-7lwt5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-embed-certs-790322             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-57wqx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-m4ssq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m24s                  kube-proxy       
	  Normal   Starting                 57s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m44s (x8 over 2m44s)  kubelet          Node embed-certs-790322 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m44s (x8 over 2m44s)  kubelet          Node embed-certs-790322 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m44s (x8 over 2m44s)  kubelet          Node embed-certs-790322 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m32s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m32s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m31s                  kubelet          Node embed-certs-790322 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m31s                  kubelet          Node embed-certs-790322 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m31s                  kubelet          Node embed-certs-790322 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m27s                  node-controller  Node embed-certs-790322 event: Registered Node embed-certs-790322 in Controller
	  Normal   NodeReady                105s                   kubelet          Node embed-certs-790322 status is now: NodeReady
	  Normal   Starting                 68s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)      kubelet          Node embed-certs-790322 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)      kubelet          Node embed-certs-790322 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x8 over 68s)      kubelet          Node embed-certs-790322 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                    node-controller  Node embed-certs-790322 event: Registered Node embed-certs-790322 in Controller
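
The Ready condition in the table above is the same field the earlier node_ready.go retries were polling ("node ... has \"Ready\":\"False\" status (will retry)"). A small client-go sketch of that check, assuming a kubeconfig at the default path; minikube's own helper differs in detail:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's Ready condition is True.
func nodeIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeIsReady(cs, "embed-certs-790322")
	fmt.Println(ready, err)
}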
	
	
	==> dmesg <==
	[Oct27 23:02] overlayfs: idmapped layers are currently not supported
	[Oct27 23:03] overlayfs: idmapped layers are currently not supported
	[Oct27 23:04] overlayfs: idmapped layers are currently not supported
	[Oct27 23:06] overlayfs: idmapped layers are currently not supported
	[  +3.129054] overlayfs: idmapped layers are currently not supported
	[Oct27 23:08] overlayfs: idmapped layers are currently not supported
	[Oct27 23:09] overlayfs: idmapped layers are currently not supported
	[  +0.696324] overlayfs: idmapped layers are currently not supported
	[ +42.065460] overlayfs: idmapped layers are currently not supported
	[Oct27 23:10] overlayfs: idmapped layers are currently not supported
	[ +23.722860] overlayfs: idmapped layers are currently not supported
	[Oct27 23:16] overlayfs: idmapped layers are currently not supported
	[Oct27 23:17] overlayfs: idmapped layers are currently not supported
	[Oct27 23:18] overlayfs: idmapped layers are currently not supported
	[Oct27 23:19] overlayfs: idmapped layers are currently not supported
	[Oct27 23:20] overlayfs: idmapped layers are currently not supported
	[Oct27 23:21] overlayfs: idmapped layers are currently not supported
	[Oct27 23:22] overlayfs: idmapped layers are currently not supported
	[ +34.590925] overlayfs: idmapped layers are currently not supported
	[Oct27 23:23] overlayfs: idmapped layers are currently not supported
	[  +6.906011] overlayfs: idmapped layers are currently not supported
	[Oct27 23:25] overlayfs: idmapped layers are currently not supported
	[  +2.284017] overlayfs: idmapped layers are currently not supported
	[Oct27 23:27] overlayfs: idmapped layers are currently not supported
	[  +6.661421] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [04d779de2ba59c56b41e444a5f41bcb57f87bfbcebe9ef9955704cdc0d568248] <==
	{"level":"warn","ts":"2025-10-27T23:27:11.750633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:11.796987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:11.823257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:11.854584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:11.889808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:11.923831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:11.959318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.027426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.078927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.135953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.177567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.218449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.274274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.292874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.319980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.336639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.354479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.375526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.395082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.419404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.486459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.519235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.543412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.568093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.634895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44422","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:28:14 up  6:10,  0 user,  load average: 3.39, 3.95, 3.39
	Linux embed-certs-790322 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a25501fea7b4d9ca522fa06ad5ad513cb99d9c3bdc51bc7296798233ca0230d1] <==
	I1027 23:27:15.456899       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:27:15.463893       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 23:27:15.464026       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:27:15.464038       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:27:15.464049       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:27:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:27:15.741577       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:27:15.741597       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:27:15.741606       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:27:15.741936       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 23:27:45.741645       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 23:27:45.741645       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 23:27:45.742573       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 23:27:45.742573       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1027 23:27:47.341737       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 23:27:47.341767       1 metrics.go:72] Registering metrics
	I1027 23:27:47.341840       1 controller.go:711] "Syncing nftables rules"
	I1027 23:27:55.745409       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 23:27:55.745465       1 main.go:301] handling current node
	I1027 23:28:05.747704       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 23:28:05.747742       1 main.go:301] handling current node
	
	
	==> kube-apiserver [99cfb8a94d79f6c5bfe51cd7b6b319af3c0441589946869eae5fa78fc69cdf42] <==
	I1027 23:27:14.037369       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 23:27:14.037410       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 23:27:14.067468       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 23:27:14.074812       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 23:27:14.075203       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 23:27:14.090316       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 23:27:14.271380       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1027 23:27:14.271448       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 23:27:14.271785       1 aggregator.go:171] initial CRD sync complete...
	I1027 23:27:14.271802       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 23:27:14.271809       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 23:27:14.271816       1 cache.go:39] Caches are synced for autoregister controller
	I1027 23:27:14.278618       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1027 23:27:14.320464       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 23:27:14.548139       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 23:27:14.883316       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 23:27:16.066212       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 23:27:16.321853       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 23:27:16.447197       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 23:27:16.509497       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 23:27:16.857379       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.92.227"}
	I1027 23:27:16.922770       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.143.189"}
	I1027 23:27:19.501807       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 23:27:19.552309       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 23:27:19.601804       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4cca3101ea45339f788b56e37456e84838b100b57b1522533eaa76028f279109] <==
	I1027 23:27:19.123418       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:27:19.136592       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 23:27:19.139841       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 23:27:19.139964       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 23:27:19.140233       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-790322"
	I1027 23:27:19.140288       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 23:27:19.146571       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 23:27:19.146619       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 23:27:19.146669       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 23:27:19.146732       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 23:27:19.146829       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:27:19.146845       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 23:27:19.146852       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 23:27:19.146906       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 23:27:19.146925       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 23:27:19.146573       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 23:27:19.148782       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 23:27:19.152740       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 23:27:19.156800       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 23:27:19.153047       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 23:27:19.157735       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 23:27:19.153061       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 23:27:19.159719       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:27:19.159869       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 23:27:19.163151       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [7cb3f092409e678570d4a74471cfdaa27f1dffbc700779b3a9bb259a5c2669ab] <==
	I1027 23:27:16.482972       1 server_linux.go:53] "Using iptables proxy"
	I1027 23:27:16.733898       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 23:27:16.843408       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 23:27:16.843440       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 23:27:16.843519       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 23:27:16.963414       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:27:16.963470       1 server_linux.go:132] "Using iptables Proxier"
	I1027 23:27:16.967423       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 23:27:16.967682       1 server.go:527] "Version info" version="v1.34.1"
	I1027 23:27:16.967696       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:27:16.974484       1 config.go:106] "Starting endpoint slice config controller"
	I1027 23:27:16.974511       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 23:27:16.974821       1 config.go:200] "Starting service config controller"
	I1027 23:27:16.974828       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 23:27:16.975143       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 23:27:16.975150       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 23:27:16.975513       1 config.go:309] "Starting node config controller"
	I1027 23:27:16.975520       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 23:27:16.975526       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 23:27:17.075317       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 23:27:17.075352       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 23:27:17.075403       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2dd33085839f4b3ec48e1cee1be0d27c1b29b3ebaf8e0437c48d7c3fc9c0602c] <==
	I1027 23:27:11.673602       1 serving.go:386] Generated self-signed cert in-memory
	I1027 23:27:14.404940       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 23:27:14.404971       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:27:14.428645       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 23:27:14.428752       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 23:27:14.428769       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 23:27:14.428790       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 23:27:14.450610       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:27:14.450645       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:27:14.450666       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:27:14.450676       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:27:14.531374       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 23:27:14.551711       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:27:14.551832       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 23:27:19 embed-certs-790322 kubelet[779]: I1027 23:27:19.808698     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nr8k\" (UniqueName: \"kubernetes.io/projected/88b8fc67-6604-45fe-b0d8-30629563166a-kube-api-access-6nr8k\") pod \"dashboard-metrics-scraper-6ffb444bf9-57wqx\" (UID: \"88b8fc67-6604-45fe-b0d8-30629563166a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57wqx"
	Oct 27 23:27:19 embed-certs-790322 kubelet[779]: I1027 23:27:19.908921     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/00ed63f7-8d59-4ed6-84ce-e3dc2e39663d-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-m4ssq\" (UID: \"00ed63f7-8d59-4ed6-84ce-e3dc2e39663d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m4ssq"
	Oct 27 23:27:19 embed-certs-790322 kubelet[779]: I1027 23:27:19.908992     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6224z\" (UniqueName: \"kubernetes.io/projected/00ed63f7-8d59-4ed6-84ce-e3dc2e39663d-kube-api-access-6224z\") pod \"kubernetes-dashboard-855c9754f9-m4ssq\" (UID: \"00ed63f7-8d59-4ed6-84ce-e3dc2e39663d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m4ssq"
	Oct 27 23:27:20 embed-certs-790322 kubelet[779]: W1027 23:27:20.345795     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88/crio-2bad7d37d6aacac6f37bded35fb1ab3519d753073ce6a903bfc9104b91dfe1e2 WatchSource:0}: Error finding container 2bad7d37d6aacac6f37bded35fb1ab3519d753073ce6a903bfc9104b91dfe1e2: Status 404 returned error can't find the container with id 2bad7d37d6aacac6f37bded35fb1ab3519d753073ce6a903bfc9104b91dfe1e2
	Oct 27 23:27:24 embed-certs-790322 kubelet[779]: I1027 23:27:24.089171     779 scope.go:117] "RemoveContainer" containerID="08789c2214c0b55112414297af534a052e12d73ffd34eab97a628dd133b052dd"
	Oct 27 23:27:25 embed-certs-790322 kubelet[779]: I1027 23:27:25.095930     779 scope.go:117] "RemoveContainer" containerID="08789c2214c0b55112414297af534a052e12d73ffd34eab97a628dd133b052dd"
	Oct 27 23:27:25 embed-certs-790322 kubelet[779]: I1027 23:27:25.096279     779 scope.go:117] "RemoveContainer" containerID="69b31109dfb216de334a4eb880b9900e2aa6d1f727120ce6b45cef8a71fe5927"
	Oct 27 23:27:25 embed-certs-790322 kubelet[779]: E1027 23:27:25.096440     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57wqx_kubernetes-dashboard(88b8fc67-6604-45fe-b0d8-30629563166a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57wqx" podUID="88b8fc67-6604-45fe-b0d8-30629563166a"
	Oct 27 23:27:26 embed-certs-790322 kubelet[779]: I1027 23:27:26.103492     779 scope.go:117] "RemoveContainer" containerID="69b31109dfb216de334a4eb880b9900e2aa6d1f727120ce6b45cef8a71fe5927"
	Oct 27 23:27:26 embed-certs-790322 kubelet[779]: E1027 23:27:26.103693     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57wqx_kubernetes-dashboard(88b8fc67-6604-45fe-b0d8-30629563166a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57wqx" podUID="88b8fc67-6604-45fe-b0d8-30629563166a"
	Oct 27 23:27:30 embed-certs-790322 kubelet[779]: I1027 23:27:30.019202     779 scope.go:117] "RemoveContainer" containerID="69b31109dfb216de334a4eb880b9900e2aa6d1f727120ce6b45cef8a71fe5927"
	Oct 27 23:27:30 embed-certs-790322 kubelet[779]: E1027 23:27:30.019426     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57wqx_kubernetes-dashboard(88b8fc67-6604-45fe-b0d8-30629563166a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57wqx" podUID="88b8fc67-6604-45fe-b0d8-30629563166a"
	Oct 27 23:27:44 embed-certs-790322 kubelet[779]: I1027 23:27:44.820593     779 scope.go:117] "RemoveContainer" containerID="69b31109dfb216de334a4eb880b9900e2aa6d1f727120ce6b45cef8a71fe5927"
	Oct 27 23:27:45 embed-certs-790322 kubelet[779]: I1027 23:27:45.164961     779 scope.go:117] "RemoveContainer" containerID="69b31109dfb216de334a4eb880b9900e2aa6d1f727120ce6b45cef8a71fe5927"
	Oct 27 23:27:45 embed-certs-790322 kubelet[779]: I1027 23:27:45.165298     779 scope.go:117] "RemoveContainer" containerID="54aca756edf6b0a8c3a0290a2ca66f5bbb838e6236a4f936a4d1c751c77e8379"
	Oct 27 23:27:45 embed-certs-790322 kubelet[779]: E1027 23:27:45.165458     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57wqx_kubernetes-dashboard(88b8fc67-6604-45fe-b0d8-30629563166a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57wqx" podUID="88b8fc67-6604-45fe-b0d8-30629563166a"
	Oct 27 23:27:45 embed-certs-790322 kubelet[779]: I1027 23:27:45.219592     779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m4ssq" podStartSLOduration=17.708399901 podStartE2EDuration="26.219572635s" podCreationTimestamp="2025-10-27 23:27:19 +0000 UTC" firstStartedPulling="2025-10-27 23:27:20.350219964 +0000 UTC m=+13.872354878" lastFinishedPulling="2025-10-27 23:27:28.861392698 +0000 UTC m=+22.383527612" observedRunningTime="2025-10-27 23:27:29.132712539 +0000 UTC m=+22.654847461" watchObservedRunningTime="2025-10-27 23:27:45.219572635 +0000 UTC m=+38.741707549"
	Oct 27 23:27:46 embed-certs-790322 kubelet[779]: I1027 23:27:46.169579     779 scope.go:117] "RemoveContainer" containerID="81dc02aac9076639d9e778fbd45c09fa3c0cf603955a2ad1a2dad43abd3483e3"
	Oct 27 23:27:50 embed-certs-790322 kubelet[779]: I1027 23:27:50.012351     779 scope.go:117] "RemoveContainer" containerID="54aca756edf6b0a8c3a0290a2ca66f5bbb838e6236a4f936a4d1c751c77e8379"
	Oct 27 23:27:50 embed-certs-790322 kubelet[779]: E1027 23:27:50.012565     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57wqx_kubernetes-dashboard(88b8fc67-6604-45fe-b0d8-30629563166a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57wqx" podUID="88b8fc67-6604-45fe-b0d8-30629563166a"
	Oct 27 23:28:04 embed-certs-790322 kubelet[779]: I1027 23:28:04.820767     779 scope.go:117] "RemoveContainer" containerID="54aca756edf6b0a8c3a0290a2ca66f5bbb838e6236a4f936a4d1c751c77e8379"
	Oct 27 23:28:04 embed-certs-790322 kubelet[779]: E1027 23:28:04.821479     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57wqx_kubernetes-dashboard(88b8fc67-6604-45fe-b0d8-30629563166a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57wqx" podUID="88b8fc67-6604-45fe-b0d8-30629563166a"
	Oct 27 23:28:11 embed-certs-790322 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 23:28:11 embed-certs-790322 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 23:28:11 embed-certs-790322 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b97f21439a7b96012b6e8dfefc7cdd720fd915384d907a5cf119f81e99ecad9c] <==
	2025/10/27 23:27:28 Starting overwatch
	2025/10/27 23:27:28 Using namespace: kubernetes-dashboard
	2025/10/27 23:27:28 Using in-cluster config to connect to apiserver
	2025/10/27 23:27:28 Using secret token for csrf signing
	2025/10/27 23:27:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 23:27:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 23:27:28 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 23:27:28 Generating JWE encryption key
	2025/10/27 23:27:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 23:27:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 23:27:29 Initializing JWE encryption key from synchronized object
	2025/10/27 23:27:29 Creating in-cluster Sidecar client
	2025/10/27 23:27:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 23:27:29 Serving insecurely on HTTP port: 9090
	2025/10/27 23:27:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [685f12b4b12a0f9d4b7e38925a0ba384cfd8201d295e923f85d5c37491f0f479] <==
	I1027 23:27:46.231689       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 23:27:46.231814       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 23:27:46.233873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:27:49.689465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:27:53.950122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:27:57.549091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:00.604728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:03.627264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:03.632894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:28:03.633129       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 23:28:03.633344       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-790322_4dc5a071-4fab-4d1f-bf5b-806aa5d8a4a0!
	I1027 23:28:03.633555       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe00f650-32eb-4f9d-b262-03caa020ad86", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-790322_4dc5a071-4fab-4d1f-bf5b-806aa5d8a4a0 became leader
	W1027 23:28:03.642251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:03.645945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:28:03.733605       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-790322_4dc5a071-4fab-4d1f-bf5b-806aa5d8a4a0!
	W1027 23:28:05.648723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:05.655727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:07.660350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:07.665594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:09.668847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:09.674524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:11.681143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:11.700881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:13.703760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:13.708446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [81dc02aac9076639d9e778fbd45c09fa3c0cf603955a2ad1a2dad43abd3483e3] <==
	I1027 23:27:15.989732       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 23:27:46.018952       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
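
Note: the storage-provisioner block above repeatedly warns "v1 Endpoints is deprecated in v1.33+" because the bundled provisioner still takes its leader-election lock on an Endpoints object (the LeaderElection event above references Kind:"Endpoints", Name:"k8s.io-minikube-hostpath"). For reference only, a minimal client-go sketch of the modern Lease-based lock follows; the lock name mirrors the log above, while the identity and timings are illustrative assumptions, not minikube's actual code.

	package main

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Lease-based lock in coordination.k8s.io; unlike an Endpoints-based
		// lock it does not trigger the deprecation warnings captured above.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:    client.CoordinationV1(),
			// Identity is a hypothetical holder name for this sketch.
			LockConfig: resourcelock.ResourceLockConfig{Identity: "example-holder"},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* start the provisioner controller */ },
				OnStoppedLeading: func() { /* stop work; another replica now leads */ },
			},
		})
	}
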
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-790322 -n embed-certs-790322
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-790322 -n embed-certs-790322: exit status 2 (400.62954ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-790322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
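
Note: the --format={{.APIServer}} and --format={{.Host}} flags used in these post-mortems are plain Go text/template expressions evaluated against minikube's status struct, which is why the captured stdout is the bare word "Running". A self-contained sketch, with the struct shape assumed for illustration rather than copied from minikube's exact type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status approximates the fields the report renders; the real type
	// lives inside minikube's status command.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		// Same template syntax as the --format flag above.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		if err := tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Running"}); err != nil {
			panic(err)
		}
		// Prints: Running; a successfully paused cluster would render "Paused".
	}
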
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-790322
E1027 23:28:15.482213 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/bridge-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:243: (dbg) docker inspect embed-certs-790322:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88",
	        "Created": "2025-10-27T23:25:09.592548844Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1372248,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T23:26:58.024355998Z",
	            "FinishedAt": "2025-10-27T23:26:56.962967944Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88/hosts",
	        "LogPath": "/var/lib/docker/containers/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88-json.log",
	        "Name": "/embed-certs-790322",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-790322:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-790322",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88",
	                "LowerDir": "/var/lib/docker/overlay2/2ae6e33e0abf8cb5abe216433ff774e2094abeb181f625d12b51874bce8486b6-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2ae6e33e0abf8cb5abe216433ff774e2094abeb181f625d12b51874bce8486b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2ae6e33e0abf8cb5abe216433ff774e2094abeb181f625d12b51874bce8486b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2ae6e33e0abf8cb5abe216433ff774e2094abeb181f625d12b51874bce8486b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-790322",
	                "Source": "/var/lib/docker/volumes/embed-certs-790322/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-790322",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-790322",
	                "name.minikube.sigs.k8s.io": "embed-certs-790322",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b9c6a10432ae92d29bcf105db510e223adf32a22224e6daa6ddc959e54a6a67d",
	            "SandboxKey": "/var/run/docker/netns/b9c6a10432ae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34589"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34590"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34593"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34591"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34592"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-790322": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:89:b9:19:98:1d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "49c1672ada24cf39a040b77c54572c8441994ff7afeb8ca5778d5d7aaf9fecd8",
	                    "EndpointID": "eefec1e90bffcb5fd648cbac499815ab57f6148fa11712e40f6b5acd6db02f95",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-790322",
	                        "f2a16ed0b5f1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
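
Note: the pause failure is visible directly in the inspect payload above: "Status": "running" with "Paused": false, where a successful pause would report "paused"/true. A minimal Go sketch that checks exactly those fields; the profile name is taken from this report, and the struct is a hand-trimmed subset of the docker inspect schema:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectState is the subset of `docker inspect` output needed to tell
	// a paused container from a running one.
	type inspectState struct {
		State struct {
			Status  string `json:"Status"`
			Running bool   `json:"Running"`
			Paused  bool   `json:"Paused"`
		} `json:"State"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "embed-certs-790322").Output()
		if err != nil {
			panic(err)
		}
		var containers []inspectState
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		for _, c := range containers {
			fmt.Printf("status=%s running=%v paused=%v\n", c.State.Status, c.State.Running, c.State.Paused)
		}
	}
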
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790322 -n embed-certs-790322
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790322 -n embed-certs-790322: exit status 2 (402.383386ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-790322 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-790322 logs -n 25: (1.302907309s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-477179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ start   │ -p old-k8s-version-477179 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:24 UTC │
	│ image   │ old-k8s-version-477179 image list --format=json                                                                                                                                                                                               │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │ 27 Oct 25 23:24 UTC │
	│ pause   │ -p old-k8s-version-477179 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │                     │
	│ delete  │ -p old-k8s-version-477179                                                                                                                                                                                                                     │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │ 27 Oct 25 23:25 UTC │
	│ delete  │ -p old-k8s-version-477179                                                                                                                                                                                                                     │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ start   │ -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:26 UTC │
	│ addons  │ enable metrics-server -p no-preload-947754 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │                     │
	│ stop    │ -p no-preload-947754 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ addons  │ enable dashboard -p no-preload-947754 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ start   │ -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:26 UTC │
	│ image   │ no-preload-947754 image list --format=json                                                                                                                                                                                                    │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ pause   │ -p no-preload-947754 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	│ delete  │ -p no-preload-947754                                                                                                                                                                                                                          │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ delete  │ -p no-preload-947754                                                                                                                                                                                                                          │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ delete  │ -p disable-driver-mounts-247293                                                                                                                                                                                                               │ disable-driver-mounts-247293 │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ start   │ -p default-k8s-diff-port-336451 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:28 UTC │
	│ addons  │ enable metrics-server -p embed-certs-790322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	│ stop    │ -p embed-certs-790322 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-790322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ start   │ -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:27 UTC │
	│ image   │ embed-certs-790322 image list --format=json                                                                                                                                                                                                   │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ pause   │ -p embed-certs-790322 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-336451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-336451 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 23:26:57
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 23:26:57.629666 1372118 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:26:57.630326 1372118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:26:57.630364 1372118 out.go:374] Setting ErrFile to fd 2...
	I1027 23:26:57.630435 1372118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:26:57.630762 1372118 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:26:57.631216 1372118 out.go:368] Setting JSON to false
	I1027 23:26:57.632240 1372118 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22167,"bootTime":1761585451,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 23:26:57.632349 1372118 start.go:143] virtualization:  
	I1027 23:26:57.635499 1372118 out.go:179] * [embed-certs-790322] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 23:26:57.639638 1372118 notify.go:221] Checking for updates...
	I1027 23:26:57.640621 1372118 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:26:57.646013 1372118 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:26:57.649169 1372118 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:26:57.652247 1372118 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 23:26:57.655512 1372118 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 23:26:57.658358 1372118 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 23:26:57.661854 1372118 config.go:182] Loaded profile config "embed-certs-790322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:26:57.662570 1372118 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:26:57.719881 1372118 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 23:26:57.719979 1372118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:26:57.816133 1372118 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 23:26:57.801869037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:26:57.816234 1372118 docker.go:318] overlay module found
	I1027 23:26:57.819654 1372118 out.go:179] * Using the docker driver based on existing profile
	I1027 23:26:57.822419 1372118 start.go:307] selected driver: docker
	I1027 23:26:57.822435 1372118 start.go:928] validating driver "docker" against &{Name:embed-certs-790322 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:26:57.822557 1372118 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:26:57.823249 1372118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:26:57.911780 1372118 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 23:26:57.902033646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:26:57.912102 1372118 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:26:57.912132 1372118 cni.go:84] Creating CNI manager for ""
	I1027 23:26:57.912183 1372118 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:26:57.912218 1372118 start.go:351] cluster config:
	{Name:embed-certs-790322 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:26:57.915350 1372118 out.go:179] * Starting "embed-certs-790322" primary control-plane node in "embed-certs-790322" cluster
	I1027 23:26:57.918215 1372118 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 23:26:57.921146 1372118 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 23:26:57.923980 1372118 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:26:57.924038 1372118 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 23:26:57.924062 1372118 cache.go:59] Caching tarball of preloaded images
	I1027 23:26:57.924148 1372118 preload.go:233] Found /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 23:26:57.924157 1372118 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 23:26:57.924286 1372118 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/config.json ...
	I1027 23:26:57.924490 1372118 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 23:26:57.946720 1372118 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 23:26:57.946741 1372118 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 23:26:57.946755 1372118 cache.go:233] Successfully downloaded all kic artifacts
	I1027 23:26:57.946778 1372118 start.go:360] acquireMachinesLock for embed-certs-790322: {Name:mk0a741ca206e2e37bd9112a34c7fc5ed8359e78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:26:57.946830 1372118 start.go:364] duration metric: took 33.239µs to acquireMachinesLock for "embed-certs-790322"
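The acquireMachinesLock lines above show minikube serializing machine operations behind a named lock with a 500ms retry delay and a 10m timeout. A minimal Go sketch of that acquire-with-timeout pattern, assuming a hypothetical non-blocking tryLock; minikube's real lock implementation is not reproduced here:

// A minimal sketch of acquire-with-timeout, per the Delay:500ms
// Timeout:10m0s parameters logged above. tryLock is a hypothetical
// stand-in for a non-blocking named-lock acquisition.
package main

import (
	"errors"
	"fmt"
	"time"
)

// tryLock stands in for a non-blocking named-lock acquisition.
func tryLock(name string) bool {
	_ = name // placeholder: pretend the lock is always free
	return true
}

// acquireWithTimeout polls tryLock, sleeping delay between attempts,
// until it succeeds or the timeout deadline passes.
func acquireWithTimeout(name string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for !tryLock(name) {
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + name)
		}
		time.Sleep(delay)
	}
	return nil
}

func main() {
	start := time.Now()
	if err := acquireWithTimeout("embed-certs-790322", 500*time.Millisecond, 10*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("took %s to acquire lock\n", time.Since(start))
}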
	I1027 23:26:57.946849 1372118 start.go:96] Skipping create...Using existing machine configuration
	I1027 23:26:57.946854 1372118 fix.go:55] fixHost starting: 
	I1027 23:26:57.947100 1372118 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:26:57.980727 1372118 fix.go:113] recreateIfNeeded on embed-certs-790322: state=Stopped err=<nil>
	W1027 23:26:57.980756 1372118 fix.go:139] unexpected machine state, will restart: <nil>
	I1027 23:26:56.025667 1369496 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 23:26:56.026130 1369496 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 23:26:56.477016 1369496 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 23:26:56.671259 1369496 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 23:26:57.762794 1369496 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 23:26:58.081211 1369496 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 23:26:58.805554 1369496 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 23:26:58.808233 1369496 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 23:26:58.825117 1369496 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 23:26:58.828793 1369496 out.go:252]   - Booting up control plane ...
	I1027 23:26:58.828915 1369496 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 23:26:58.840658 1369496 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 23:26:58.842136 1369496 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 23:26:58.864049 1369496 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 23:26:58.864187 1369496 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 23:26:58.873660 1369496 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 23:26:58.874262 1369496 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 23:26:58.874539 1369496 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 23:26:59.080521 1369496 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 23:26:59.080651 1369496 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 23:27:00.581426 1369496 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501339765s
	I1027 23:27:00.584884 1369496 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 23:27:00.584976 1369496 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1027 23:27:00.585295 1369496 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 23:27:00.585396 1369496 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 23:26:57.983904 1372118 out.go:252] * Restarting existing docker container for "embed-certs-790322" ...
	I1027 23:26:57.983987 1372118 cli_runner.go:164] Run: docker start embed-certs-790322
	I1027 23:26:58.327945 1372118 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:26:58.366280 1372118 kic.go:430] container "embed-certs-790322" state is running.
	I1027 23:26:58.367082 1372118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-790322
	I1027 23:26:58.400611 1372118 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/config.json ...
	I1027 23:26:58.400861 1372118 machine.go:94] provisionDockerMachine start ...
	I1027 23:26:58.400931 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:26:58.426994 1372118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:26:58.427322 1372118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34589 <nil> <nil>}
	I1027 23:26:58.427331 1372118 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:26:58.428275 1372118 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50640->127.0.0.1:34589: read: connection reset by peer
	I1027 23:27:01.622790 1372118 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-790322
	
	I1027 23:27:01.622827 1372118 ubuntu.go:182] provisioning hostname "embed-certs-790322"
	I1027 23:27:01.622918 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:01.668222 1372118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:27:01.668540 1372118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34589 <nil> <nil>}
	I1027 23:27:01.668557 1372118 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-790322 && echo "embed-certs-790322" | sudo tee /etc/hostname
	I1027 23:27:01.880089 1372118 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-790322
	
	I1027 23:27:01.880214 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:01.914678 1372118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:27:01.914993 1372118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34589 <nil> <nil>}
	I1027 23:27:01.915017 1372118 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-790322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-790322/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-790322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:27:02.100016 1372118 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 23:27:02.100086 1372118 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:27:02.100146 1372118 ubuntu.go:190] setting up certificates
	I1027 23:27:02.100174 1372118 provision.go:84] configureAuth start
	I1027 23:27:02.100252 1372118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-790322
	I1027 23:27:02.126984 1372118 provision.go:143] copyHostCerts
	I1027 23:27:02.127050 1372118 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:27:02.127065 1372118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:27:02.127143 1372118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:27:02.127251 1372118 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:27:02.127257 1372118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:27:02.127282 1372118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:27:02.127340 1372118 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:27:02.127344 1372118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:27:02.127366 1372118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:27:02.127412 1372118 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.embed-certs-790322 san=[127.0.0.1 192.168.85.2 embed-certs-790322 localhost minikube]
	I1027 23:27:03.574875 1369496 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.98960737s
	I1027 23:27:02.724924 1372118 provision.go:177] copyRemoteCerts
	I1027 23:27:02.725053 1372118 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:27:02.725125 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:02.742703 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:02.855688 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:27:02.901503 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 23:27:02.931477 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 23:27:02.967998 1372118 provision.go:87] duration metric: took 867.785329ms to configureAuth
	I1027 23:27:02.968070 1372118 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:27:02.968305 1372118 config.go:182] Loaded profile config "embed-certs-790322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:27:02.968463 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:02.996153 1372118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:27:02.996460 1372118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34589 <nil> <nil>}
	I1027 23:27:02.996478 1372118 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:27:03.467739 1372118 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:27:03.467809 1372118 machine.go:97] duration metric: took 5.066930053s to provisionDockerMachine
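The provisioning step above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube over SSH and restarts cri-o. A minimal sketch of how such a remote command could be composed in Go; runSSH is a hypothetical stand-in for minikube's ssh runner, and only the command text mirrors the log:

// Sketch of composing the cri-o sysconfig provisioning command seen above.
package main

import "fmt"

// runSSH stands in for minikube's ssh_runner; it would execute cmd on the node.
func runSSH(cmd string) error {
	_ = cmd
	return nil
}

// configureCRIO composes the same sysconfig write and cri-o restart
// shown in the log above.
func configureCRIO(serviceCIDR string) error {
	opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", serviceCIDR)
	cmd := fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s \"\n%s\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", opts)
	return runSSH(cmd)
}

func main() {
	if err := configureCRIO("10.96.0.0/12"); err != nil {
		fmt.Println("provisioning failed:", err)
	}
}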
	I1027 23:27:03.467856 1372118 start.go:293] postStartSetup for "embed-certs-790322" (driver="docker")
	I1027 23:27:03.467893 1372118 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:27:03.467987 1372118 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:27:03.468071 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:03.493180 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:03.623500 1372118 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:27:03.627633 1372118 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:27:03.627671 1372118 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:27:03.627684 1372118 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:27:03.627749 1372118 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:27:03.627833 1372118 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:27:03.627947 1372118 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:27:03.644048 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:27:03.666091 1372118 start.go:296] duration metric: took 198.192776ms for postStartSetup
	I1027 23:27:03.666182 1372118 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:27:03.666245 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:03.682357 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:03.791652 1372118 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:27:03.798570 1372118 fix.go:57] duration metric: took 5.851708801s for fixHost
	I1027 23:27:03.798605 1372118 start.go:83] releasing machines lock for "embed-certs-790322", held for 5.851767157s
	I1027 23:27:03.798684 1372118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-790322
	I1027 23:27:03.828892 1372118 ssh_runner.go:195] Run: cat /version.json
	I1027 23:27:03.828957 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:03.829216 1372118 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:27:03.829280 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:03.879957 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:03.888974 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:04.102180 1372118 ssh_runner.go:195] Run: systemctl --version
	I1027 23:27:04.115296 1372118 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:27:04.181664 1372118 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:27:04.191270 1372118 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:27:04.191392 1372118 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:27:04.204722 1372118 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 23:27:04.204802 1372118 start.go:496] detecting cgroup driver to use...
	I1027 23:27:04.204849 1372118 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 23:27:04.204926 1372118 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:27:04.220880 1372118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:27:04.240791 1372118 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:27:04.240899 1372118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:27:04.258648 1372118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:27:04.286284 1372118 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:27:04.454855 1372118 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:27:04.644920 1372118 docker.go:234] disabling docker service ...
	I1027 23:27:04.645058 1372118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:27:04.660850 1372118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:27:04.675695 1372118 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:27:04.868099 1372118 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:27:05.063828 1372118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:27:05.082647 1372118 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:27:05.107749 1372118 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:27:05.107822 1372118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.121233 1372118 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:27:05.121307 1372118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.143748 1372118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.160586 1372118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.179086 1372118 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:27:05.191735 1372118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.207415 1372118 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.218949 1372118 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.235732 1372118 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:27:05.248461 1372118 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:27:05.264882 1372118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:27:05.462697 1372118 ssh_runner.go:195] Run: sudo systemctl restart crio
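The sed invocations above all follow one pattern: replace any existing assignment of a key in /etc/crio/crio.conf.d/02-crio.conf with a new value, then daemon-reload and restart cri-o. A minimal Go sketch of that key rewrite, assuming local file access rather than the SSH runner:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfKey replaces any existing `key = ...` assignment with a fresh
// quoted value, mirroring sed 's|^.*key = .*$|key = "value"|'.
func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	// Values taken from the log above.
	if err := setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
		fmt.Println(err)
	}
	if err := setConfKey(conf, "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
}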
	I1027 23:27:05.711167 1372118 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:27:05.711239 1372118 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:27:05.715341 1372118 start.go:564] Will wait 60s for crictl version
	I1027 23:27:05.715407 1372118 ssh_runner.go:195] Run: which crictl
	I1027 23:27:05.718946 1372118 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:27:05.766824 1372118 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 23:27:05.766910 1372118 ssh_runner.go:195] Run: crio --version
	I1027 23:27:05.820172 1372118 ssh_runner.go:195] Run: crio --version
	I1027 23:27:05.871373 1372118 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 23:27:05.874464 1372118 cli_runner.go:164] Run: docker network inspect embed-certs-790322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:27:05.904076 1372118 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 23:27:05.908444 1372118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
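The bash one-liner above makes the host.minikube.internal mapping idempotent: drop any prior line ending in the tab-separated name, then append the fresh IP mapping (the same pattern recurs later for control-plane.minikube.internal). A minimal local-file sketch in Go:

package main

import (
	"log"
	"os"
	"strings"
)

// pinHost drops any line ending in "\t<name>" and appends a fresh mapping,
// so repeated runs leave exactly one entry.
func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}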
	I1027 23:27:05.923731 1372118 kubeadm.go:884] updating cluster {Name:embed-certs-790322 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:27:05.923843 1372118 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:27:05.923904 1372118 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:27:06.009813 1372118 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:27:06.009911 1372118 crio.go:433] Images already preloaded, skipping extraction
	I1027 23:27:06.010028 1372118 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:27:06.059961 1372118 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:27:06.059987 1372118 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:27:06.059996 1372118 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 23:27:06.060099 1372118 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-790322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 23:27:06.060192 1372118 ssh_runner.go:195] Run: crio config
	I1027 23:27:06.181535 1372118 cni.go:84] Creating CNI manager for ""
	I1027 23:27:06.181558 1372118 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:27:06.181577 1372118 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 23:27:06.181600 1372118 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-790322 NodeName:embed-certs-790322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:27:06.181732 1372118 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-790322"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 23:27:06.181812 1372118 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:27:06.192912 1372118 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:27:06.192995 1372118 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:27:06.203308 1372118 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1027 23:27:06.218584 1372118 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:27:06.232422 1372118 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1027 23:27:06.247296 1372118 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:27:06.251492 1372118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:27:06.261925 1372118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:27:06.457092 1372118 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:27:06.478856 1372118 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322 for IP: 192.168.85.2
	I1027 23:27:06.478875 1372118 certs.go:195] generating shared ca certs ...
	I1027 23:27:06.478891 1372118 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:27:06.479031 1372118 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:27:06.479080 1372118 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:27:06.479090 1372118 certs.go:257] generating profile certs ...
	I1027 23:27:06.479179 1372118 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/client.key
	I1027 23:27:06.479248 1372118 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/apiserver.key.f07237cc
	I1027 23:27:06.479292 1372118 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/proxy-client.key
	I1027 23:27:06.479402 1372118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:27:06.479436 1372118 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:27:06.479448 1372118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:27:06.479471 1372118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:27:06.479496 1372118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:27:06.479722 1372118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:27:06.479825 1372118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:27:06.480838 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:27:06.546023 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:27:06.590814 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:27:06.650028 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:27:06.677604 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1027 23:27:06.733526 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 23:27:06.770512 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:27:06.794546 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 23:27:06.817673 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:27:06.845792 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:27:06.874996 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:27:06.907763 1372118 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:27:06.939835 1372118 ssh_runner.go:195] Run: openssl version
	I1027 23:27:06.947898 1372118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:27:06.961316 1372118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:27:06.967846 1372118 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:27:06.967971 1372118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:27:07.018751 1372118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:27:07.027283 1372118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:27:07.035876 1372118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:27:07.040843 1372118 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:27:07.040991 1372118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:27:07.085555 1372118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 23:27:07.094489 1372118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:27:07.103537 1372118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:27:07.108009 1372118 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:27:07.108154 1372118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:27:07.150730 1372118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
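Each certificate above is installed into the system trust store the same way: openssl x509 -hash -noout prints the certificate's subject hash, and /etc/ssl/certs/<hash>.0 is symlinked to the PEM so OpenSSL-based lookups can find it. A minimal sketch that shells out to the same openssl invocation; the example path is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert asks openssl for the certificate's subject hash and links
// /etc/ssl/certs/<hash>.0 to the PEM, like the `ln -fs` in the log.
func trustCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // drop any stale link first
	return os.Symlink(pem, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}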
	I1027 23:27:07.160134 1372118 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:27:07.164988 1372118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 23:27:07.214638 1372118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 23:27:07.268298 1372118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 23:27:07.344572 1372118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 23:27:07.414155 1372118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 23:27:07.508607 1372118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
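Each openssl x509 -checkend 86400 run above probes whether a control-plane certificate expires within the next 24 hours; a non-zero exit is the cue to regenerate it. A minimal sketch of the same probe:

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinDay reports whether the certificate expires within 86400s;
// openssl exits non-zero in that case, so any error counts as "yes".
func expiresWithinDay(cert string) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", cert, "-checkend", "86400").Run()
	return err != nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		fmt.Println(c, "expires within 24h:", expiresWithinDay(c))
	}
}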
	I1027 23:27:07.566964 1372118 kubeadm.go:401] StartCluster: {Name:embed-certs-790322 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:27:07.567056 1372118 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:27:07.567131 1372118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:27:07.721596 1372118 cri.go:89] found id: "2dd33085839f4b3ec48e1cee1be0d27c1b29b3ebaf8e0437c48d7c3fc9c0602c"
	I1027 23:27:07.721621 1372118 cri.go:89] found id: "04d779de2ba59c56b41e444a5f41bcb57f87bfbcebe9ef9955704cdc0d568248"
	I1027 23:27:07.721626 1372118 cri.go:89] found id: "4cca3101ea45339f788b56e37456e84838b100b57b1522533eaa76028f279109"
	I1027 23:27:07.721636 1372118 cri.go:89] found id: ""
	I1027 23:27:07.721689 1372118 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 23:27:07.809334 1372118 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:27:07Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:27:07.809421 1372118 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:27:07.830014 1372118 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 23:27:07.830034 1372118 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 23:27:07.830105 1372118 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 23:27:07.845122 1372118 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 23:27:07.845557 1372118 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-790322" does not appear in /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:27:07.845661 1372118 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-1132878/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-790322" cluster setting kubeconfig missing "embed-certs-790322" context setting]
	I1027 23:27:07.845942 1372118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:27:07.847319 1372118 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 23:27:07.868638 1372118 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 23:27:07.868673 1372118 kubeadm.go:602] duration metric: took 38.632535ms to restartPrimaryControlPlane
	I1027 23:27:07.868682 1372118 kubeadm.go:403] duration metric: took 301.730067ms to StartCluster
	I1027 23:27:07.868697 1372118 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:27:07.868756 1372118 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:27:07.869767 1372118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:27:07.869989 1372118 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:27:07.870257 1372118 config.go:182] Loaded profile config "embed-certs-790322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:27:07.870306 1372118 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:27:07.870424 1372118 addons.go:69] Setting dashboard=true in profile "embed-certs-790322"
	I1027 23:27:07.870374 1372118 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-790322"
	I1027 23:27:07.870449 1372118 addons.go:238] Setting addon dashboard=true in "embed-certs-790322"
	I1027 23:27:07.870456 1372118 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-790322"
	W1027 23:27:07.870457 1372118 addons.go:247] addon dashboard should already be in state true
	W1027 23:27:07.870462 1372118 addons.go:247] addon storage-provisioner should already be in state true
	I1027 23:27:07.870482 1372118 host.go:66] Checking if "embed-certs-790322" exists ...
	I1027 23:27:07.870485 1372118 host.go:66] Checking if "embed-certs-790322" exists ...
	I1027 23:27:07.870932 1372118 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:27:07.870947 1372118 addons.go:69] Setting default-storageclass=true in profile "embed-certs-790322"
	I1027 23:27:07.870960 1372118 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-790322"
	I1027 23:27:07.871199 1372118 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:27:07.870934 1372118 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:27:07.874327 1372118 out.go:179] * Verifying Kubernetes components...
	I1027 23:27:07.877483 1372118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:27:07.921642 1372118 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:27:07.923871 1372118 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:27:07.923902 1372118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:27:07.923973 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:07.926608 1372118 addons.go:238] Setting addon default-storageclass=true in "embed-certs-790322"
	W1027 23:27:07.926636 1372118 addons.go:247] addon default-storageclass should already be in state true
	I1027 23:27:07.926662 1372118 host.go:66] Checking if "embed-certs-790322" exists ...
	I1027 23:27:07.927094 1372118 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:27:07.930680 1372118 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 23:27:07.934972 1372118 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 23:27:07.589168 1369496 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.003295676s
	I1027 23:27:08.586654 1369496 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.00161344s
	I1027 23:27:08.617820 1369496 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 23:27:08.651361 1369496 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 23:27:08.672815 1369496 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 23:27:08.673024 1369496 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-336451 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 23:27:08.695558 1369496 kubeadm.go:319] [bootstrap-token] Using token: j9lm8r.7dur7mpnl819twae
	I1027 23:27:08.698544 1369496 out.go:252]   - Configuring RBAC rules ...
	I1027 23:27:08.698661 1369496 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 23:27:08.705744 1369496 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 23:27:08.723693 1369496 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 23:27:08.731147 1369496 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 23:27:08.736342 1369496 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 23:27:08.745908 1369496 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 23:27:09.017778 1369496 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 23:27:09.574635 1369496 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 23:27:09.998756 1369496 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 23:27:10.000172 1369496 kubeadm.go:319] 
	I1027 23:27:10.000265 1369496 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 23:27:10.000277 1369496 kubeadm.go:319] 
	I1027 23:27:10.000361 1369496 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 23:27:10.000371 1369496 kubeadm.go:319] 
	I1027 23:27:10.000398 1369496 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 23:27:10.000892 1369496 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 23:27:10.000961 1369496 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 23:27:10.000971 1369496 kubeadm.go:319] 
	I1027 23:27:10.001030 1369496 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 23:27:10.001039 1369496 kubeadm.go:319] 
	I1027 23:27:10.001091 1369496 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 23:27:10.001099 1369496 kubeadm.go:319] 
	I1027 23:27:10.001163 1369496 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 23:27:10.001249 1369496 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 23:27:10.001327 1369496 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 23:27:10.001335 1369496 kubeadm.go:319] 
	I1027 23:27:10.001629 1369496 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 23:27:10.001721 1369496 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 23:27:10.001731 1369496 kubeadm.go:319] 
	I1027 23:27:10.002145 1369496 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token j9lm8r.7dur7mpnl819twae \
	I1027 23:27:10.002273 1369496 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 \
	I1027 23:27:10.002492 1369496 kubeadm.go:319] 	--control-plane 
	I1027 23:27:10.002509 1369496 kubeadm.go:319] 
	I1027 23:27:10.002795 1369496 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 23:27:10.002815 1369496 kubeadm.go:319] 
	I1027 23:27:10.003080 1369496 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token j9lm8r.7dur7mpnl819twae \
	I1027 23:27:10.003401 1369496 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 
	I1027 23:27:10.009000 1369496 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 23:27:10.009283 1369496 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 23:27:10.009410 1369496 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 23:27:10.009468 1369496 cni.go:84] Creating CNI manager for ""
	I1027 23:27:10.009482 1369496 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:27:10.013092 1369496 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 23:27:10.016073 1369496 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 23:27:10.032899 1369496 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 23:27:10.032926 1369496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 23:27:10.084560 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
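A hedged follow-up check that the kindnet manifest took effect: CRI-O watches /etc/cni/net.d/, and the conflist name below matches the CNI monitoring events later in this log (the file only appears once the kindnet pod writes it, so it may lag the apply):

	ls /etc/cni/net.d/
	cat /etc/cni/net.d/10-kindnet.conflist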
	I1027 23:27:10.555414 1369496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 23:27:10.555538 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:10.555613 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-336451 minikube.k8s.io/updated_at=2025_10_27T23_27_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=default-k8s-diff-port-336451 minikube.k8s.io/primary=true
	I1027 23:27:07.942570 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 23:27:07.942597 1372118 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 23:27:07.942676 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:07.970507 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:07.977164 1372118 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:27:07.977185 1372118 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:27:07.977247 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:08.010762 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:08.030543 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:08.342954 1372118 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:27:08.363752 1372118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:27:08.405304 1372118 node_ready.go:35] waiting up to 6m0s for node "embed-certs-790322" to be "Ready" ...
	I1027 23:27:08.479620 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 23:27:08.479646 1372118 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 23:27:08.508486 1372118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:27:08.515674 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 23:27:08.515702 1372118 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 23:27:08.610848 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 23:27:08.610914 1372118 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 23:27:08.743517 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 23:27:08.743586 1372118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 23:27:08.814050 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 23:27:08.814117 1372118 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 23:27:08.837148 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 23:27:08.837221 1372118 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 23:27:08.859763 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 23:27:08.859839 1372118 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 23:27:08.880028 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 23:27:08.880102 1372118 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 23:27:08.907564 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 23:27:08.907638 1372118 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 23:27:08.935516 1372118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
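After that batched apply, a hedged check that the dashboard actually rolled out; the namespace and deployment name are the upstream dashboard defaults (assumed here), consistent with the kubernetes-dashboard pods seen later in this log:

	kubectl -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard --timeout=2m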
	I1027 23:27:10.876897 1369496 ops.go:34] apiserver oom_adj: -16
	I1027 23:27:10.876997 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:11.377135 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:11.877315 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:12.377098 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:12.877634 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:13.377806 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:13.877368 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:14.378067 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:14.628184 1369496 kubeadm.go:1114] duration metric: took 4.072679138s to wait for elevateKubeSystemPrivileges
	I1027 23:27:14.628211 1369496 kubeadm.go:403] duration metric: took 22.864632047s to StartCluster
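The repeated `kubectl get sa default` calls above are minikube polling until the default ServiceAccount exists, its signal that kube-system privileges have been elevated. A minimal shell equivalent of that loop, assuming the same binary and kubeconfig paths:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5  # the log shows roughly 500ms between attempts
	done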
	I1027 23:27:14.628228 1369496 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:27:14.628287 1369496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:27:14.629803 1369496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:27:14.630050 1369496 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:27:14.630138 1369496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 23:27:14.630441 1369496 config.go:182] Loaded profile config "default-k8s-diff-port-336451": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:27:14.630483 1369496 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:27:14.630541 1369496 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-336451"
	I1027 23:27:14.630555 1369496 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-336451"
	I1027 23:27:14.630575 1369496 host.go:66] Checking if "default-k8s-diff-port-336451" exists ...
	I1027 23:27:14.631062 1369496 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-336451"
	I1027 23:27:14.631080 1369496 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-336451"
	I1027 23:27:14.631353 1369496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-336451 --format={{.State.Status}}
	I1027 23:27:14.631693 1369496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-336451 --format={{.State.Status}}
	I1027 23:27:14.635148 1369496 out.go:179] * Verifying Kubernetes components...
	I1027 23:27:14.638515 1369496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:27:14.668067 1369496 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-336451"
	I1027 23:27:14.668115 1369496 host.go:66] Checking if "default-k8s-diff-port-336451" exists ...
	I1027 23:27:14.668539 1369496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-336451 --format={{.State.Status}}
	I1027 23:27:14.675228 1369496 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:27:14.680124 1369496 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:27:14.680150 1369496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:27:14.680213 1369496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:27:14.704695 1369496 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:27:14.704721 1369496 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:27:14.704784 1369496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:27:14.731557 1369496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34584 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/default-k8s-diff-port-336451/id_rsa Username:docker}
	I1027 23:27:14.742439 1369496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34584 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/default-k8s-diff-port-336451/id_rsa Username:docker}
	I1027 23:27:15.224704 1369496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:27:15.318545 1369496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:27:15.390982 1369496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 23:27:15.391153 1369496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:27:16.939430 1369496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.714694755s)
	I1027 23:27:16.939476 1369496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.620913736s)
	I1027 23:27:16.939769 1369496 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.548578836s)
	I1027 23:27:16.940917 1369496 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-336451" to be "Ready" ...
	I1027 23:27:16.941165 1369496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.550112241s)
	I1027 23:27:16.941180 1369496 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
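The sed pipeline that just completed injects a hosts block mapping host.minikube.internal to the host gateway (192.168.76.1 here) into the CoreDNS Corefile. A hedged way to confirm the record landed, reusing the same binary and kubeconfig paths:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get configmap coredns -o yaml | grep -n 'host.minikube.internal'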
	I1027 23:27:17.067100 1369496 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 23:27:13.874223 1372118 node_ready.go:49] node "embed-certs-790322" is "Ready"
	I1027 23:27:13.874298 1372118 node_ready.go:38] duration metric: took 5.468960816s for node "embed-certs-790322" to be "Ready" ...
	I1027 23:27:13.874327 1372118 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:27:13.874432 1372118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:27:17.240012 1372118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.876173866s)
	I1027 23:27:17.240079 1372118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.731569168s)
	I1027 23:27:17.240439 1372118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.304837211s)
	I1027 23:27:17.241092 1372118 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.366626363s)
	I1027 23:27:17.241118 1372118 api_server.go:72] duration metric: took 9.371098403s to wait for apiserver process to appear ...
	I1027 23:27:17.241124 1372118 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:27:17.241138 1372118 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 23:27:17.243741 1372118 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-790322 addons enable metrics-server
	
	I1027 23:27:17.256320 1372118 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
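That healthz probe is easy to reproduce by hand; a sketch against the same endpoint from the log (-k because the apiserver's CA is not in the local trust store):

	curl -k https://192.168.85.2:8443/healthz
	# prints: ok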
	I1027 23:27:17.257988 1372118 api_server.go:141] control plane version: v1.34.1
	I1027 23:27:17.258012 1372118 api_server.go:131] duration metric: took 16.88182ms to wait for apiserver health ...
	I1027 23:27:17.258022 1372118 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:27:17.262230 1372118 system_pods.go:59] 8 kube-system pods found
	I1027 23:27:17.262268 1372118 system_pods.go:61] "coredns-66bc5c9577-7czsv" [2949488f-bf74-4218-b480-955908b58ac0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:27:17.262278 1372118 system_pods.go:61] "etcd-embed-certs-790322" [592926b2-df2b-407d-8c86-931a4162bdd6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:27:17.262284 1372118 system_pods.go:61] "kindnet-l2rcj" [c50bbe3e-12b4-4007-aa20-dfd1b04d38aa] Running
	I1027 23:27:17.262291 1372118 system_pods.go:61] "kube-apiserver-embed-certs-790322" [3839b875-fa30-4534-b042-37b5493241ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:27:17.262299 1372118 system_pods.go:61] "kube-controller-manager-embed-certs-790322" [ebf1417a-4c48-4950-9e6b-85d4856dc0c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:27:17.262304 1372118 system_pods.go:61] "kube-proxy-7lwt5" [5d8f2c0d-30b5-487c-9d9e-e7be86b3be39] Running
	I1027 23:27:17.262312 1372118 system_pods.go:61] "kube-scheduler-embed-certs-790322" [cd6b90e4-d691-4163-815e-56ff72e4ba2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:27:17.262325 1372118 system_pods.go:61] "storage-provisioner" [2d42c557-cbb9-445c-8bd8-7b481a959c11] Running
	I1027 23:27:17.262331 1372118 system_pods.go:74] duration metric: took 4.302994ms to wait for pod list to return data ...
	I1027 23:27:17.262339 1372118 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:27:17.264424 1372118 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 23:27:17.265670 1372118 default_sa.go:45] found service account: "default"
	I1027 23:27:17.265691 1372118 default_sa.go:55] duration metric: took 3.341528ms for default service account to be created ...
	I1027 23:27:17.265700 1372118 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:27:17.267823 1372118 addons.go:514] duration metric: took 9.397513282s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 23:27:17.269731 1372118 system_pods.go:86] 8 kube-system pods found
	I1027 23:27:17.269763 1372118 system_pods.go:89] "coredns-66bc5c9577-7czsv" [2949488f-bf74-4218-b480-955908b58ac0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:27:17.269773 1372118 system_pods.go:89] "etcd-embed-certs-790322" [592926b2-df2b-407d-8c86-931a4162bdd6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:27:17.269807 1372118 system_pods.go:89] "kindnet-l2rcj" [c50bbe3e-12b4-4007-aa20-dfd1b04d38aa] Running
	I1027 23:27:17.269816 1372118 system_pods.go:89] "kube-apiserver-embed-certs-790322" [3839b875-fa30-4534-b042-37b5493241ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:27:17.269827 1372118 system_pods.go:89] "kube-controller-manager-embed-certs-790322" [ebf1417a-4c48-4950-9e6b-85d4856dc0c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:27:17.269833 1372118 system_pods.go:89] "kube-proxy-7lwt5" [5d8f2c0d-30b5-487c-9d9e-e7be86b3be39] Running
	I1027 23:27:17.269839 1372118 system_pods.go:89] "kube-scheduler-embed-certs-790322" [cd6b90e4-d691-4163-815e-56ff72e4ba2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:27:17.269844 1372118 system_pods.go:89] "storage-provisioner" [2d42c557-cbb9-445c-8bd8-7b481a959c11] Running
	I1027 23:27:17.269854 1372118 system_pods.go:126] duration metric: took 4.147832ms to wait for k8s-apps to be running ...
	I1027 23:27:17.269890 1372118 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:27:17.269953 1372118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:27:17.285105 1372118 system_svc.go:56] duration metric: took 15.215681ms WaitForService to wait for kubelet
	I1027 23:27:17.285132 1372118 kubeadm.go:587] duration metric: took 9.415111469s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:27:17.285152 1372118 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:27:17.288591 1372118 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:27:17.288620 1372118 node_conditions.go:123] node cpu capacity is 2
	I1027 23:27:17.288631 1372118 node_conditions.go:105] duration metric: took 3.474913ms to run NodePressure ...
	I1027 23:27:17.288644 1372118 start.go:242] waiting for startup goroutines ...
	I1027 23:27:17.288651 1372118 start.go:247] waiting for cluster config update ...
	I1027 23:27:17.288662 1372118 start.go:256] writing updated cluster config ...
	I1027 23:27:17.288954 1372118 ssh_runner.go:195] Run: rm -f paused
	I1027 23:27:17.293358 1372118 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:27:17.297645 1372118 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7czsv" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:17.069995 1369496 addons.go:514] duration metric: took 2.43947725s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 23:27:17.445817 1369496 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-336451" context rescaled to 1 replicas
	W1027 23:27:18.944917 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:19.303525 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:21.303757 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:20.944970 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:23.444340 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:25.444545 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:23.303865 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:25.305363 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:27.944636 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:29.945351 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:27.802993 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:29.805442 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:32.303094 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:31.945833 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:34.443546 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:34.303156 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:36.303987 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:36.444401 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:38.945276 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:38.803141 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:40.807249 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:40.946308 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:43.443932 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:45.444057 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:43.304281 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:45.315142 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:47.444601 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:49.944862 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:47.803124 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:49.803899 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:52.302643 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:51.951303 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:54.444066 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:54.303440 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:56.804763 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	I1027 23:27:57.303397 1372118 pod_ready.go:94] pod "coredns-66bc5c9577-7czsv" is "Ready"
	I1027 23:27:57.303428 1372118 pod_ready.go:86] duration metric: took 40.005747477s for pod "coredns-66bc5c9577-7czsv" in "kube-system" namespace to be "Ready" or be gone ...
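The retry loop that just finished is minikube's own readiness poll ("Ready" or gone). Roughly the same wait, expressed directly with kubectl against the pod name from this log:

	kubectl -n kube-system wait pod coredns-66bc5c9577-7czsv --for=condition=Ready --timeout=4m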
	I1027 23:27:57.306074 1372118 pod_ready.go:83] waiting for pod "etcd-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.310979 1372118 pod_ready.go:94] pod "etcd-embed-certs-790322" is "Ready"
	I1027 23:27:57.311008 1372118 pod_ready.go:86] duration metric: took 4.906875ms for pod "etcd-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.313335 1372118 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.317784 1372118 pod_ready.go:94] pod "kube-apiserver-embed-certs-790322" is "Ready"
	I1027 23:27:57.317811 1372118 pod_ready.go:86] duration metric: took 4.447226ms for pod "kube-apiserver-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.320275 1372118 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.501919 1372118 pod_ready.go:94] pod "kube-controller-manager-embed-certs-790322" is "Ready"
	I1027 23:27:57.501951 1372118 pod_ready.go:86] duration metric: took 181.642312ms for pod "kube-controller-manager-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.702272 1372118 pod_ready.go:83] waiting for pod "kube-proxy-7lwt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.101593 1372118 pod_ready.go:94] pod "kube-proxy-7lwt5" is "Ready"
	I1027 23:27:58.101632 1372118 pod_ready.go:86] duration metric: took 399.333918ms for pod "kube-proxy-7lwt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.302030 1372118 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.702130 1372118 pod_ready.go:94] pod "kube-scheduler-embed-certs-790322" is "Ready"
	I1027 23:27:58.702156 1372118 pod_ready.go:86] duration metric: took 400.098647ms for pod "kube-scheduler-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.702169 1372118 pod_ready.go:40] duration metric: took 41.408773009s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:27:58.771969 1372118 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:27:58.775340 1372118 out.go:179] * Done! kubectl is now configured to use "embed-certs-790322" cluster and "default" namespace by default
	W1027 23:27:56.944057 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	I1027 23:27:57.453799 1369496 node_ready.go:49] node "default-k8s-diff-port-336451" is "Ready"
	I1027 23:27:57.453832 1369496 node_ready.go:38] duration metric: took 40.512898119s for node "default-k8s-diff-port-336451" to be "Ready" ...
	I1027 23:27:57.453846 1369496 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:27:57.453908 1369496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:27:57.472544 1369496 api_server.go:72] duration metric: took 42.842462718s to wait for apiserver process to appear ...
	I1027 23:27:57.472572 1369496 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:27:57.472601 1369496 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1027 23:27:57.481723 1369496 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1027 23:27:57.482839 1369496 api_server.go:141] control plane version: v1.34.1
	I1027 23:27:57.482868 1369496 api_server.go:131] duration metric: took 10.289376ms to wait for apiserver health ...
	I1027 23:27:57.482876 1369496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:27:57.485982 1369496 system_pods.go:59] 8 kube-system pods found
	I1027 23:27:57.486032 1369496 system_pods.go:61] "coredns-66bc5c9577-lzssb" [cb585899-022a-4a05-b73d-ab4ef8e7119a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:27:57.486041 1369496 system_pods.go:61] "etcd-default-k8s-diff-port-336451" [d2052799-8302-43e4-b2de-1ae7ecc5d073] Running
	I1027 23:27:57.486050 1369496 system_pods.go:61] "kindnet-ht7mm" [972ca641-7980-4167-9478-45795128282d] Running
	I1027 23:27:57.486055 1369496 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-336451" [6c97a839-7855-4ce4-a15e-765781f00b89] Running
	I1027 23:27:57.486060 1369496 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-336451" [45c8bd93-e3d8-416f-9550-55eb28cef602] Running
	I1027 23:27:57.486065 1369496 system_pods.go:61] "kube-proxy-n4vzn" [883449ce-dcf8-47d7-8f93-9fc7612cf7a1] Running
	I1027 23:27:57.486070 1369496 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-336451" [fd388522-944b-4447-a8db-8bfa05f722ea] Running
	I1027 23:27:57.486077 1369496 system_pods.go:61] "storage-provisioner" [376c0c54-0b9b-47ed-a3c0-d74fcdf0c102] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:27:57.486088 1369496 system_pods.go:74] duration metric: took 3.206486ms to wait for pod list to return data ...
	I1027 23:27:57.486097 1369496 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:27:57.488683 1369496 default_sa.go:45] found service account: "default"
	I1027 23:27:57.488755 1369496 default_sa.go:55] duration metric: took 2.651861ms for default service account to be created ...
	I1027 23:27:57.488771 1369496 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:27:57.491648 1369496 system_pods.go:86] 8 kube-system pods found
	I1027 23:27:57.491685 1369496 system_pods.go:89] "coredns-66bc5c9577-lzssb" [cb585899-022a-4a05-b73d-ab4ef8e7119a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:27:57.491692 1369496 system_pods.go:89] "etcd-default-k8s-diff-port-336451" [d2052799-8302-43e4-b2de-1ae7ecc5d073] Running
	I1027 23:27:57.491698 1369496 system_pods.go:89] "kindnet-ht7mm" [972ca641-7980-4167-9478-45795128282d] Running
	I1027 23:27:57.491705 1369496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-336451" [6c97a839-7855-4ce4-a15e-765781f00b89] Running
	I1027 23:27:57.491709 1369496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-336451" [45c8bd93-e3d8-416f-9550-55eb28cef602] Running
	I1027 23:27:57.491714 1369496 system_pods.go:89] "kube-proxy-n4vzn" [883449ce-dcf8-47d7-8f93-9fc7612cf7a1] Running
	I1027 23:27:57.491718 1369496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-336451" [fd388522-944b-4447-a8db-8bfa05f722ea] Running
	I1027 23:27:57.491724 1369496 system_pods.go:89] "storage-provisioner" [376c0c54-0b9b-47ed-a3c0-d74fcdf0c102] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:27:57.491744 1369496 retry.go:31] will retry after 216.8039ms: missing components: kube-dns
	I1027 23:27:57.712499 1369496 system_pods.go:86] 8 kube-system pods found
	I1027 23:27:57.712534 1369496 system_pods.go:89] "coredns-66bc5c9577-lzssb" [cb585899-022a-4a05-b73d-ab4ef8e7119a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:27:57.712541 1369496 system_pods.go:89] "etcd-default-k8s-diff-port-336451" [d2052799-8302-43e4-b2de-1ae7ecc5d073] Running
	I1027 23:27:57.712547 1369496 system_pods.go:89] "kindnet-ht7mm" [972ca641-7980-4167-9478-45795128282d] Running
	I1027 23:27:57.712552 1369496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-336451" [6c97a839-7855-4ce4-a15e-765781f00b89] Running
	I1027 23:27:57.712556 1369496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-336451" [45c8bd93-e3d8-416f-9550-55eb28cef602] Running
	I1027 23:27:57.712569 1369496 system_pods.go:89] "kube-proxy-n4vzn" [883449ce-dcf8-47d7-8f93-9fc7612cf7a1] Running
	I1027 23:27:57.712581 1369496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-336451" [fd388522-944b-4447-a8db-8bfa05f722ea] Running
	I1027 23:27:57.712591 1369496 system_pods.go:89] "storage-provisioner" [376c0c54-0b9b-47ed-a3c0-d74fcdf0c102] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:27:57.712606 1369496 retry.go:31] will retry after 332.328897ms: missing components: kube-dns
	I1027 23:27:58.048510 1369496 system_pods.go:86] 8 kube-system pods found
	I1027 23:27:58.048549 1369496 system_pods.go:89] "coredns-66bc5c9577-lzssb" [cb585899-022a-4a05-b73d-ab4ef8e7119a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:27:58.048555 1369496 system_pods.go:89] "etcd-default-k8s-diff-port-336451" [d2052799-8302-43e4-b2de-1ae7ecc5d073] Running
	I1027 23:27:58.048583 1369496 system_pods.go:89] "kindnet-ht7mm" [972ca641-7980-4167-9478-45795128282d] Running
	I1027 23:27:58.048595 1369496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-336451" [6c97a839-7855-4ce4-a15e-765781f00b89] Running
	I1027 23:27:58.048600 1369496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-336451" [45c8bd93-e3d8-416f-9550-55eb28cef602] Running
	I1027 23:27:58.048605 1369496 system_pods.go:89] "kube-proxy-n4vzn" [883449ce-dcf8-47d7-8f93-9fc7612cf7a1] Running
	I1027 23:27:58.048609 1369496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-336451" [fd388522-944b-4447-a8db-8bfa05f722ea] Running
	I1027 23:27:58.048621 1369496 system_pods.go:89] "storage-provisioner" [376c0c54-0b9b-47ed-a3c0-d74fcdf0c102] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:27:58.048638 1369496 retry.go:31] will retry after 460.922768ms: missing components: kube-dns
	I1027 23:27:58.514497 1369496 system_pods.go:86] 8 kube-system pods found
	I1027 23:27:58.514528 1369496 system_pods.go:89] "coredns-66bc5c9577-lzssb" [cb585899-022a-4a05-b73d-ab4ef8e7119a] Running
	I1027 23:27:58.514536 1369496 system_pods.go:89] "etcd-default-k8s-diff-port-336451" [d2052799-8302-43e4-b2de-1ae7ecc5d073] Running
	I1027 23:27:58.514541 1369496 system_pods.go:89] "kindnet-ht7mm" [972ca641-7980-4167-9478-45795128282d] Running
	I1027 23:27:58.514568 1369496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-336451" [6c97a839-7855-4ce4-a15e-765781f00b89] Running
	I1027 23:27:58.514583 1369496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-336451" [45c8bd93-e3d8-416f-9550-55eb28cef602] Running
	I1027 23:27:58.514587 1369496 system_pods.go:89] "kube-proxy-n4vzn" [883449ce-dcf8-47d7-8f93-9fc7612cf7a1] Running
	I1027 23:27:58.514591 1369496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-336451" [fd388522-944b-4447-a8db-8bfa05f722ea] Running
	I1027 23:27:58.514596 1369496 system_pods.go:89] "storage-provisioner" [376c0c54-0b9b-47ed-a3c0-d74fcdf0c102] Running
	I1027 23:27:58.514604 1369496 system_pods.go:126] duration metric: took 1.025828047s to wait for k8s-apps to be running ...
	I1027 23:27:58.514615 1369496 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:27:58.514685 1369496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:27:58.527910 1369496 system_svc.go:56] duration metric: took 13.284355ms WaitForService to wait for kubelet
	I1027 23:27:58.527991 1369496 kubeadm.go:587] duration metric: took 43.897912924s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:27:58.528022 1369496 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:27:58.530975 1369496 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:27:58.531012 1369496 node_conditions.go:123] node cpu capacity is 2
	I1027 23:27:58.531026 1369496 node_conditions.go:105] duration metric: took 2.998065ms to run NodePressure ...
	I1027 23:27:58.531040 1369496 start.go:242] waiting for startup goroutines ...
	I1027 23:27:58.531047 1369496 start.go:247] waiting for cluster config update ...
	I1027 23:27:58.531058 1369496 start.go:256] writing updated cluster config ...
	I1027 23:27:58.531349 1369496 ssh_runner.go:195] Run: rm -f paused
	I1027 23:27:58.535071 1369496 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:27:58.540137 1369496 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lzssb" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.544988 1369496 pod_ready.go:94] pod "coredns-66bc5c9577-lzssb" is "Ready"
	I1027 23:27:58.545018 1369496 pod_ready.go:86] duration metric: took 4.849939ms for pod "coredns-66bc5c9577-lzssb" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.547774 1369496 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.560603 1369496 pod_ready.go:94] pod "etcd-default-k8s-diff-port-336451" is "Ready"
	I1027 23:27:58.560631 1369496 pod_ready.go:86] duration metric: took 12.829505ms for pod "etcd-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.563118 1369496 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.567963 1369496 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-336451" is "Ready"
	I1027 23:27:58.567990 1369496 pod_ready.go:86] duration metric: took 4.84856ms for pod "kube-apiserver-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.570520 1369496 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.942942 1369496 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-336451" is "Ready"
	I1027 23:27:58.942969 1369496 pod_ready.go:86] duration metric: took 372.417831ms for pod "kube-controller-manager-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:59.142563 1369496 pod_ready.go:83] waiting for pod "kube-proxy-n4vzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:59.540641 1369496 pod_ready.go:94] pod "kube-proxy-n4vzn" is "Ready"
	I1027 23:27:59.540665 1369496 pod_ready.go:86] duration metric: took 398.079189ms for pod "kube-proxy-n4vzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:59.741260 1369496 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:28:00.173655 1369496 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-336451" is "Ready"
	I1027 23:28:00.173689 1369496 pod_ready.go:86] duration metric: took 432.399523ms for pod "kube-scheduler-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:28:00.173703 1369496 pod_ready.go:40] duration metric: took 1.638599587s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:28:00.365146 1369496 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:28:00.384228 1369496 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-336451" cluster and "default" namespace by default
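With both clusters reporting Done!, a hedged sanity check that kubectl really points at the profile the log names (minikube sets the kubeconfig context to the profile name):

	kubectl config current-context   # default-k8s-diff-port-336451
	kubectl get nodes -o wide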
	
	
	==> CRI-O <==
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.171484375Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=49cf5575-b3a4-40bf-b4ec-133995f8b132 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.172591418Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=854eafd1-47e5-4ea1-bd7b-f5b53d1d0538 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.172825145Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.182788715Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.183002313Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/96c53e68e39339af13f58632ff1639a1ff1909423528c4a4435b3b9d12dfd59c/merged/etc/passwd: no such file or directory"
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.183026708Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/96c53e68e39339af13f58632ff1639a1ff1909423528c4a4435b3b9d12dfd59c/merged/etc/group: no such file or directory"
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.183292492Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.201008343Z" level=info msg="Created container 685f12b4b12a0f9d4b7e38925a0ba384cfd8201d295e923f85d5c37491f0f479: kube-system/storage-provisioner/storage-provisioner" id=854eafd1-47e5-4ea1-bd7b-f5b53d1d0538 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.201906743Z" level=info msg="Starting container: 685f12b4b12a0f9d4b7e38925a0ba384cfd8201d295e923f85d5c37491f0f479" id=f4e1fe5a-ac75-40fe-a18c-ed73938b2b06 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:27:46 embed-certs-790322 crio[651]: time="2025-10-27T23:27:46.206003974Z" level=info msg="Started container" PID=1648 containerID=685f12b4b12a0f9d4b7e38925a0ba384cfd8201d295e923f85d5c37491f0f479 description=kube-system/storage-provisioner/storage-provisioner id=f4e1fe5a-ac75-40fe-a18c-ed73938b2b06 name=/runtime.v1.RuntimeService/StartContainer sandboxID=af0e082e7fe94b2dc2398c07663ed9cefad54bc74363d57c46545dfecb63d66b
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.745763341Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.74973129Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.749766457Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.7497952Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.753472823Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.753509591Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.753529644Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.756955793Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.756989779Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.757013049Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.760380358Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.760415838Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.760443793Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.763889084Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:27:55 embed-certs-790322 crio[651]: time="2025-10-27T23:27:55.763924826Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	685f12b4b12a0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           30 seconds ago       Running             storage-provisioner         2                   af0e082e7fe94       storage-provisioner                          kube-system
	54aca756edf6b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago       Exited              dashboard-metrics-scraper   2                   d4a8b3957a9dd       dashboard-metrics-scraper-6ffb444bf9-57wqx   kubernetes-dashboard
	b97f21439a7b9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   2bad7d37d6aac       kubernetes-dashboard-855c9754f9-m4ssq        kubernetes-dashboard
	e95ec2573027c       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   b8c75b476bbbd       busybox                                      default
	7cb3f092409e6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   ad8e624e74350       kube-proxy-7lwt5                             kube-system
	81dc02aac9076       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   af0e082e7fe94       storage-provisioner                          kube-system
	dd862bc0975c4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   486302e90a231       coredns-66bc5c9577-7czsv                     kube-system
	a25501fea7b4d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   9f84a593e81d2       kindnet-l2rcj                                kube-system
	99cfb8a94d79f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   915d34822b240       kube-apiserver-embed-certs-790322            kube-system
	2dd33085839f4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   c455a9e029f55       kube-scheduler-embed-certs-790322            kube-system
	04d779de2ba59       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   6d3d8c2179fdd       etcd-embed-certs-790322                      kube-system
	4cca3101ea453       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   d7a169a88a7e9       kube-controller-manager-embed-certs-790322   kube-system
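The table above is crictl-style container status; a sketch of reproducing it on the node (crictl is standard alongside CRI-O in the minikube node image; -a includes Exited containers such as the dashboard-metrics-scraper attempt):

	sudo crictl ps -a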
	
	
	==> coredns [dd862bc0975c47b020906fd67965252737767357cd14270fa3ebcf0e580227ec] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45836 - 52467 "HINFO IN 7358259606901163704.6601914373597417710. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025491328s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
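The i/o timeouts above show CoreDNS briefly unable to reach the apiserver's cluster VIP while the node was coming back up (10.96.0.1 is the first address of the default service CIDR). A hedged first check when this persists is whether the kubernetes Service and its endpoint slices look sane:

	kubectl get svc kubernetes -o wide
	kubectl -n default get endpointslices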
	
	
	==> describe nodes <==
	Name:               embed-certs-790322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-790322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=embed-certs-790322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T23_25_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 23:25:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-790322
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 23:28:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 23:27:45 +0000   Mon, 27 Oct 2025 23:25:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 23:27:45 +0000   Mon, 27 Oct 2025 23:25:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 23:27:45 +0000   Mon, 27 Oct 2025 23:25:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 23:27:45 +0000   Mon, 27 Oct 2025 23:26:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-790322
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                303b75c8-bfe7-43fd-a2ff-1f7c0bfb24ff
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 coredns-66bc5c9577-7czsv                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m28s
	  kube-system                 etcd-embed-certs-790322                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m35s
	  kube-system                 kindnet-l2rcj                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m29s
	  kube-system                 kube-apiserver-embed-certs-790322             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-controller-manager-embed-certs-790322    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-proxy-7lwt5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-scheduler-embed-certs-790322             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-57wqx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-m4ssq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m27s                  kube-proxy       
	  Normal   Starting                 59s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m46s (x8 over 2m46s)  kubelet          Node embed-certs-790322 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m46s (x8 over 2m46s)  kubelet          Node embed-certs-790322 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m46s (x8 over 2m46s)  kubelet          Node embed-certs-790322 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m34s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m33s                  kubelet          Node embed-certs-790322 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s                  kubelet          Node embed-certs-790322 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m33s                  kubelet          Node embed-certs-790322 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m29s                  node-controller  Node embed-certs-790322 event: Registered Node embed-certs-790322 in Controller
	  Normal   NodeReady                107s                   kubelet          Node embed-certs-790322 status is now: NodeReady
	  Normal   Starting                 70s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node embed-certs-790322 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node embed-certs-790322 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)      kubelet          Node embed-certs-790322 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                    node-controller  Node embed-certs-790322 event: Registered Node embed-certs-790322 in Controller
	
	
	==> dmesg <==
	[Oct27 23:02] overlayfs: idmapped layers are currently not supported
	[Oct27 23:03] overlayfs: idmapped layers are currently not supported
	[Oct27 23:04] overlayfs: idmapped layers are currently not supported
	[Oct27 23:06] overlayfs: idmapped layers are currently not supported
	[  +3.129054] overlayfs: idmapped layers are currently not supported
	[Oct27 23:08] overlayfs: idmapped layers are currently not supported
	[Oct27 23:09] overlayfs: idmapped layers are currently not supported
	[  +0.696324] overlayfs: idmapped layers are currently not supported
	[ +42.065460] overlayfs: idmapped layers are currently not supported
	[Oct27 23:10] overlayfs: idmapped layers are currently not supported
	[ +23.722860] overlayfs: idmapped layers are currently not supported
	[Oct27 23:16] overlayfs: idmapped layers are currently not supported
	[Oct27 23:17] overlayfs: idmapped layers are currently not supported
	[Oct27 23:18] overlayfs: idmapped layers are currently not supported
	[Oct27 23:19] overlayfs: idmapped layers are currently not supported
	[Oct27 23:20] overlayfs: idmapped layers are currently not supported
	[Oct27 23:21] overlayfs: idmapped layers are currently not supported
	[Oct27 23:22] overlayfs: idmapped layers are currently not supported
	[ +34.590925] overlayfs: idmapped layers are currently not supported
	[Oct27 23:23] overlayfs: idmapped layers are currently not supported
	[  +6.906011] overlayfs: idmapped layers are currently not supported
	[Oct27 23:25] overlayfs: idmapped layers are currently not supported
	[  +2.284017] overlayfs: idmapped layers are currently not supported
	[Oct27 23:27] overlayfs: idmapped layers are currently not supported
	[  +6.661421] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [04d779de2ba59c56b41e444a5f41bcb57f87bfbcebe9ef9955704cdc0d568248] <==
	{"level":"warn","ts":"2025-10-27T23:27:11.750633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:11.796987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:11.823257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:11.854584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:11.889808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:11.923831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:11.959318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.027426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.078927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.135953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.177567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.218449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.274274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.292874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.319980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.336639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.354479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.375526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.395082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.419404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.486459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.519235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.543412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.568093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:12.634895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44422","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:28:16 up  6:10,  0 user,  load average: 3.39, 3.95, 3.39
	Linux embed-certs-790322 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a25501fea7b4d9ca522fa06ad5ad513cb99d9c3bdc51bc7296798233ca0230d1] <==
	I1027 23:27:15.456899       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:27:15.463893       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 23:27:15.464026       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:27:15.464038       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:27:15.464049       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:27:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:27:15.741577       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:27:15.741597       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:27:15.741606       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:27:15.741936       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 23:27:45.741645       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 23:27:45.741645       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 23:27:45.742573       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 23:27:45.742573       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1027 23:27:47.341737       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 23:27:47.341767       1 metrics.go:72] Registering metrics
	I1027 23:27:47.341840       1 controller.go:711] "Syncing nftables rules"
	I1027 23:27:55.745409       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 23:27:55.745465       1 main.go:301] handling current node
	I1027 23:28:05.747704       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 23:28:05.747742       1 main.go:301] handling current node
	I1027 23:28:15.746683       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 23:28:15.746716       1 main.go:301] handling current node
	
	
	==> kube-apiserver [99cfb8a94d79f6c5bfe51cd7b6b319af3c0441589946869eae5fa78fc69cdf42] <==
	I1027 23:27:14.037369       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 23:27:14.037410       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 23:27:14.067468       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 23:27:14.074812       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 23:27:14.075203       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 23:27:14.090316       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 23:27:14.271380       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1027 23:27:14.271448       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 23:27:14.271785       1 aggregator.go:171] initial CRD sync complete...
	I1027 23:27:14.271802       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 23:27:14.271809       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 23:27:14.271816       1 cache.go:39] Caches are synced for autoregister controller
	I1027 23:27:14.278618       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1027 23:27:14.320464       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 23:27:14.548139       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 23:27:14.883316       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 23:27:16.066212       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 23:27:16.321853       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 23:27:16.447197       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 23:27:16.509497       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 23:27:16.857379       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.92.227"}
	I1027 23:27:16.922770       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.143.189"}
	I1027 23:27:19.501807       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 23:27:19.552309       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 23:27:19.601804       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4cca3101ea45339f788b56e37456e84838b100b57b1522533eaa76028f279109] <==
	I1027 23:27:19.123418       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:27:19.136592       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 23:27:19.139841       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 23:27:19.139964       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 23:27:19.140233       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-790322"
	I1027 23:27:19.140288       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 23:27:19.146571       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 23:27:19.146619       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 23:27:19.146669       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 23:27:19.146732       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 23:27:19.146829       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:27:19.146845       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 23:27:19.146852       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 23:27:19.146906       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 23:27:19.146925       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 23:27:19.146573       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 23:27:19.148782       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 23:27:19.152740       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 23:27:19.156800       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 23:27:19.153047       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 23:27:19.157735       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 23:27:19.153061       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 23:27:19.159719       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:27:19.159869       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 23:27:19.163151       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [7cb3f092409e678570d4a74471cfdaa27f1dffbc700779b3a9bb259a5c2669ab] <==
	I1027 23:27:16.482972       1 server_linux.go:53] "Using iptables proxy"
	I1027 23:27:16.733898       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 23:27:16.843408       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 23:27:16.843440       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 23:27:16.843519       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 23:27:16.963414       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:27:16.963470       1 server_linux.go:132] "Using iptables Proxier"
	I1027 23:27:16.967423       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 23:27:16.967682       1 server.go:527] "Version info" version="v1.34.1"
	I1027 23:27:16.967696       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:27:16.974484       1 config.go:106] "Starting endpoint slice config controller"
	I1027 23:27:16.974511       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 23:27:16.974821       1 config.go:200] "Starting service config controller"
	I1027 23:27:16.974828       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 23:27:16.975143       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 23:27:16.975150       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 23:27:16.975513       1 config.go:309] "Starting node config controller"
	I1027 23:27:16.975520       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 23:27:16.975526       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 23:27:17.075317       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 23:27:17.075352       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 23:27:17.075403       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2dd33085839f4b3ec48e1cee1be0d27c1b29b3ebaf8e0437c48d7c3fc9c0602c] <==
	I1027 23:27:11.673602       1 serving.go:386] Generated self-signed cert in-memory
	I1027 23:27:14.404940       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 23:27:14.404971       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:27:14.428645       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 23:27:14.428752       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 23:27:14.428769       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 23:27:14.428790       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 23:27:14.450610       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:27:14.450645       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:27:14.450666       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:27:14.450676       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:27:14.531374       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 23:27:14.551711       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:27:14.551832       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 23:27:19 embed-certs-790322 kubelet[779]: I1027 23:27:19.808698     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nr8k\" (UniqueName: \"kubernetes.io/projected/88b8fc67-6604-45fe-b0d8-30629563166a-kube-api-access-6nr8k\") pod \"dashboard-metrics-scraper-6ffb444bf9-57wqx\" (UID: \"88b8fc67-6604-45fe-b0d8-30629563166a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57wqx"
	Oct 27 23:27:19 embed-certs-790322 kubelet[779]: I1027 23:27:19.908921     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/00ed63f7-8d59-4ed6-84ce-e3dc2e39663d-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-m4ssq\" (UID: \"00ed63f7-8d59-4ed6-84ce-e3dc2e39663d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m4ssq"
	Oct 27 23:27:19 embed-certs-790322 kubelet[779]: I1027 23:27:19.908992     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6224z\" (UniqueName: \"kubernetes.io/projected/00ed63f7-8d59-4ed6-84ce-e3dc2e39663d-kube-api-access-6224z\") pod \"kubernetes-dashboard-855c9754f9-m4ssq\" (UID: \"00ed63f7-8d59-4ed6-84ce-e3dc2e39663d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m4ssq"
	Oct 27 23:27:20 embed-certs-790322 kubelet[779]: W1027 23:27:20.345795     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f2a16ed0b5f10e84a722f3d990b387166575e581d36210ced3a6ec1124701c88/crio-2bad7d37d6aacac6f37bded35fb1ab3519d753073ce6a903bfc9104b91dfe1e2 WatchSource:0}: Error finding container 2bad7d37d6aacac6f37bded35fb1ab3519d753073ce6a903bfc9104b91dfe1e2: Status 404 returned error can't find the container with id 2bad7d37d6aacac6f37bded35fb1ab3519d753073ce6a903bfc9104b91dfe1e2
	Oct 27 23:27:24 embed-certs-790322 kubelet[779]: I1027 23:27:24.089171     779 scope.go:117] "RemoveContainer" containerID="08789c2214c0b55112414297af534a052e12d73ffd34eab97a628dd133b052dd"
	Oct 27 23:27:25 embed-certs-790322 kubelet[779]: I1027 23:27:25.095930     779 scope.go:117] "RemoveContainer" containerID="08789c2214c0b55112414297af534a052e12d73ffd34eab97a628dd133b052dd"
	Oct 27 23:27:25 embed-certs-790322 kubelet[779]: I1027 23:27:25.096279     779 scope.go:117] "RemoveContainer" containerID="69b31109dfb216de334a4eb880b9900e2aa6d1f727120ce6b45cef8a71fe5927"
	Oct 27 23:27:25 embed-certs-790322 kubelet[779]: E1027 23:27:25.096440     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57wqx_kubernetes-dashboard(88b8fc67-6604-45fe-b0d8-30629563166a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57wqx" podUID="88b8fc67-6604-45fe-b0d8-30629563166a"
	Oct 27 23:27:26 embed-certs-790322 kubelet[779]: I1027 23:27:26.103492     779 scope.go:117] "RemoveContainer" containerID="69b31109dfb216de334a4eb880b9900e2aa6d1f727120ce6b45cef8a71fe5927"
	Oct 27 23:27:26 embed-certs-790322 kubelet[779]: E1027 23:27:26.103693     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57wqx_kubernetes-dashboard(88b8fc67-6604-45fe-b0d8-30629563166a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57wqx" podUID="88b8fc67-6604-45fe-b0d8-30629563166a"
	Oct 27 23:27:30 embed-certs-790322 kubelet[779]: I1027 23:27:30.019202     779 scope.go:117] "RemoveContainer" containerID="69b31109dfb216de334a4eb880b9900e2aa6d1f727120ce6b45cef8a71fe5927"
	Oct 27 23:27:30 embed-certs-790322 kubelet[779]: E1027 23:27:30.019426     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57wqx_kubernetes-dashboard(88b8fc67-6604-45fe-b0d8-30629563166a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57wqx" podUID="88b8fc67-6604-45fe-b0d8-30629563166a"
	Oct 27 23:27:44 embed-certs-790322 kubelet[779]: I1027 23:27:44.820593     779 scope.go:117] "RemoveContainer" containerID="69b31109dfb216de334a4eb880b9900e2aa6d1f727120ce6b45cef8a71fe5927"
	Oct 27 23:27:45 embed-certs-790322 kubelet[779]: I1027 23:27:45.164961     779 scope.go:117] "RemoveContainer" containerID="69b31109dfb216de334a4eb880b9900e2aa6d1f727120ce6b45cef8a71fe5927"
	Oct 27 23:27:45 embed-certs-790322 kubelet[779]: I1027 23:27:45.165298     779 scope.go:117] "RemoveContainer" containerID="54aca756edf6b0a8c3a0290a2ca66f5bbb838e6236a4f936a4d1c751c77e8379"
	Oct 27 23:27:45 embed-certs-790322 kubelet[779]: E1027 23:27:45.165458     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57wqx_kubernetes-dashboard(88b8fc67-6604-45fe-b0d8-30629563166a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57wqx" podUID="88b8fc67-6604-45fe-b0d8-30629563166a"
	Oct 27 23:27:45 embed-certs-790322 kubelet[779]: I1027 23:27:45.219592     779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m4ssq" podStartSLOduration=17.708399901 podStartE2EDuration="26.219572635s" podCreationTimestamp="2025-10-27 23:27:19 +0000 UTC" firstStartedPulling="2025-10-27 23:27:20.350219964 +0000 UTC m=+13.872354878" lastFinishedPulling="2025-10-27 23:27:28.861392698 +0000 UTC m=+22.383527612" observedRunningTime="2025-10-27 23:27:29.132712539 +0000 UTC m=+22.654847461" watchObservedRunningTime="2025-10-27 23:27:45.219572635 +0000 UTC m=+38.741707549"
	Oct 27 23:27:46 embed-certs-790322 kubelet[779]: I1027 23:27:46.169579     779 scope.go:117] "RemoveContainer" containerID="81dc02aac9076639d9e778fbd45c09fa3c0cf603955a2ad1a2dad43abd3483e3"
	Oct 27 23:27:50 embed-certs-790322 kubelet[779]: I1027 23:27:50.012351     779 scope.go:117] "RemoveContainer" containerID="54aca756edf6b0a8c3a0290a2ca66f5bbb838e6236a4f936a4d1c751c77e8379"
	Oct 27 23:27:50 embed-certs-790322 kubelet[779]: E1027 23:27:50.012565     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57wqx_kubernetes-dashboard(88b8fc67-6604-45fe-b0d8-30629563166a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57wqx" podUID="88b8fc67-6604-45fe-b0d8-30629563166a"
	Oct 27 23:28:04 embed-certs-790322 kubelet[779]: I1027 23:28:04.820767     779 scope.go:117] "RemoveContainer" containerID="54aca756edf6b0a8c3a0290a2ca66f5bbb838e6236a4f936a4d1c751c77e8379"
	Oct 27 23:28:04 embed-certs-790322 kubelet[779]: E1027 23:28:04.821479     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57wqx_kubernetes-dashboard(88b8fc67-6604-45fe-b0d8-30629563166a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57wqx" podUID="88b8fc67-6604-45fe-b0d8-30629563166a"
	Oct 27 23:28:11 embed-certs-790322 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 23:28:11 embed-certs-790322 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 23:28:11 embed-certs-790322 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b97f21439a7b96012b6e8dfefc7cdd720fd915384d907a5cf119f81e99ecad9c] <==
	2025/10/27 23:27:28 Using namespace: kubernetes-dashboard
	2025/10/27 23:27:28 Using in-cluster config to connect to apiserver
	2025/10/27 23:27:28 Using secret token for csrf signing
	2025/10/27 23:27:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 23:27:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 23:27:28 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 23:27:28 Generating JWE encryption key
	2025/10/27 23:27:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 23:27:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 23:27:29 Initializing JWE encryption key from synchronized object
	2025/10/27 23:27:29 Creating in-cluster Sidecar client
	2025/10/27 23:27:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 23:27:29 Serving insecurely on HTTP port: 9090
	2025/10/27 23:27:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 23:27:28 Starting overwatch
	
	
	==> storage-provisioner [685f12b4b12a0f9d4b7e38925a0ba384cfd8201d295e923f85d5c37491f0f479] <==
	W1027 23:27:46.233873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:27:49.689465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:27:53.950122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:27:57.549091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:00.604728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:03.627264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:03.632894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:28:03.633129       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 23:28:03.633344       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-790322_4dc5a071-4fab-4d1f-bf5b-806aa5d8a4a0!
	I1027 23:28:03.633555       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe00f650-32eb-4f9d-b262-03caa020ad86", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-790322_4dc5a071-4fab-4d1f-bf5b-806aa5d8a4a0 became leader
	W1027 23:28:03.642251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:03.645945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:28:03.733605       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-790322_4dc5a071-4fab-4d1f-bf5b-806aa5d8a4a0!
	W1027 23:28:05.648723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:05.655727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:07.660350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:07.665594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:09.668847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:09.674524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:11.681143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:11.700881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:13.703760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:13.708446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:15.711386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:15.720325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [81dc02aac9076639d9e778fbd45c09fa3c0cf603955a2ad1a2dad43abd3483e3] <==
	I1027 23:27:15.989732       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 23:27:46.018952       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-790322 -n embed-certs-790322
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-790322 -n embed-certs-790322: exit status 2 (345.423088ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-790322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.14s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-336451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-336451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (339.249677ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:28:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-336451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
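The exit path above is worth unpacking: MK_ADDON_ENABLE_PAUSED means minikube refuses to enable an addon until it has confirmed the cluster is not paused, and per the stderr it performs that check by shelling out to `sudo runc list -f json` on the node, which fails here because /run/runc is absent. A minimal sketch for reproducing the probe by hand, assuming the docker driver and this profile are still running (these commands are illustrative, not part of the test harness):

	minikube -p default-k8s-diff-port-336451 ssh -- sudo runc list -f json        # expect: open /run/runc: no such file or directory
	minikube -p default-k8s-diff-port-336451 ssh -- sudo crictl ps --state running # crio's own view of the running containers

On a crio node the runc state directory may simply not exist until crio has created containers through that runtime root, so the paused check can fail even while the runtime itself is healthy.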
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-336451 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-336451 describe deploy/metrics-server -n kube-system: exit status 1 (89.77045ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-336451 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-336451
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-336451:

-- stdout --
	[
	    {
	        "Id": "8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59",
	        "Created": "2025-10-27T23:26:41.328254644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1369882,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T23:26:41.390307923Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59/hostname",
	        "HostsPath": "/var/lib/docker/containers/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59/hosts",
	        "LogPath": "/var/lib/docker/containers/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59-json.log",
	        "Name": "/default-k8s-diff-port-336451",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-336451:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-336451",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59",
	                "LowerDir": "/var/lib/docker/overlay2/db307246a30588d0ae121c4ec53a2353a232f31a81ee681f92ae6a0a6bc49dc6-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/db307246a30588d0ae121c4ec53a2353a232f31a81ee681f92ae6a0a6bc49dc6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/db307246a30588d0ae121c4ec53a2353a232f31a81ee681f92ae6a0a6bc49dc6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/db307246a30588d0ae121c4ec53a2353a232f31a81ee681f92ae6a0a6bc49dc6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-336451",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-336451/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-336451",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-336451",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-336451",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f36089a8364aad2736e6349d01388dc6a1e6a221cdf7fa96b0f6db689bf27301",
	            "SandboxKey": "/var/run/docker/netns/f36089a8364a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34584"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34585"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34588"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34586"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34587"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-336451": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:64:45:ec:3f:17",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "55da9c2196e319a24b4d34567d8cd7569236804748720d465d6d478b5766bd82",
	                    "EndpointID": "ca1f04de3ed4714fe843ccf70c636a7b3ed35c455ec2f38f9a0cbb79546bb7a4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-336451",
	                        "8835f98b0ace"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
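For readers tracing the port wiring above: each container port (22, 2376, 5000, 8444, 32443) is published on a distinct loopback port, and the harness reads those mappings straight out of `docker container inspect` with a Go template (the same `(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort` expression shows up in the cli_runner calls later in this log). A minimal standalone sketch of that lookup, assuming a local docker CLI and the container name shown above:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostPort asks `docker container inspect` which host port was published
    // for the given container port (e.g. "22/tcp").
    func hostPort(container, port string) (string, error) {
    	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	p, err := hostPort("default-k8s-diff-port-336451", "22/tcp")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(p) // 34584, per the inspect output above
    }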
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-336451 -n default-k8s-diff-port-336451
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-336451 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-336451 logs -n 25: (1.409284927s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:24 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-477179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:23 UTC │
	│ start   │ -p old-k8s-version-477179 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:23 UTC │ 27 Oct 25 23:24 UTC │
	│ image   │ old-k8s-version-477179 image list --format=json                                                                                                                                                                                               │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │ 27 Oct 25 23:24 UTC │
	│ pause   │ -p old-k8s-version-477179 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │                     │
	│ delete  │ -p old-k8s-version-477179                                                                                                                                                                                                                     │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:24 UTC │ 27 Oct 25 23:25 UTC │
	│ delete  │ -p old-k8s-version-477179                                                                                                                                                                                                                     │ old-k8s-version-477179       │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ start   │ -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:26 UTC │
	│ addons  │ enable metrics-server -p no-preload-947754 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │                     │
	│ stop    │ -p no-preload-947754 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ addons  │ enable dashboard -p no-preload-947754 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ start   │ -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:26 UTC │
	│ image   │ no-preload-947754 image list --format=json                                                                                                                                                                                                    │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ pause   │ -p no-preload-947754 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	│ delete  │ -p no-preload-947754                                                                                                                                                                                                                          │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ delete  │ -p no-preload-947754                                                                                                                                                                                                                          │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ delete  │ -p disable-driver-mounts-247293                                                                                                                                                                                                               │ disable-driver-mounts-247293 │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ start   │ -p default-k8s-diff-port-336451 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:28 UTC │
	│ addons  │ enable metrics-server -p embed-certs-790322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	│ stop    │ -p embed-certs-790322 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-790322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ start   │ -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:27 UTC │
	│ image   │ embed-certs-790322 image list --format=json                                                                                                                                                                                                   │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ pause   │ -p embed-certs-790322 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-336451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 23:26:57
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 23:26:57.629666 1372118 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:26:57.630326 1372118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:26:57.630364 1372118 out.go:374] Setting ErrFile to fd 2...
	I1027 23:26:57.630435 1372118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:26:57.630762 1372118 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:26:57.631216 1372118 out.go:368] Setting JSON to false
	I1027 23:26:57.632240 1372118 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22167,"bootTime":1761585451,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 23:26:57.632349 1372118 start.go:143] virtualization:  
	I1027 23:26:57.635499 1372118 out.go:179] * [embed-certs-790322] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 23:26:57.639638 1372118 notify.go:221] Checking for updates...
	I1027 23:26:57.640621 1372118 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:26:57.646013 1372118 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:26:57.649169 1372118 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:26:57.652247 1372118 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 23:26:57.655512 1372118 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 23:26:57.658358 1372118 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 23:26:57.661854 1372118 config.go:182] Loaded profile config "embed-certs-790322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:26:57.662570 1372118 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:26:57.719881 1372118 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 23:26:57.719979 1372118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:26:57.816133 1372118 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 23:26:57.801869037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
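The docker info line above comes from `docker system info --format "{{json .}}"`, decoded into a struct; the CgroupDriver it carries (`cgroupfs` here) is what later produces the "detected \"cgroupfs\" cgroup driver" decision. A minimal sketch of that decode, assuming only the handful of fields such a check needs:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // A small subset of the fields docker reports; the full payload is much larger.
    type dockerInfo struct {
    	ServerVersion string
    	CgroupDriver  string
    	OSType        string
    	Architecture  string
    	NCPU          int
    	MemTotal      int64
    }

    func main() {
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	fmt.Printf("docker %s, cgroup driver %s, %d CPUs\n", info.ServerVersion, info.CgroupDriver, info.NCPU)
    }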
	I1027 23:26:57.816234 1372118 docker.go:318] overlay module found
	I1027 23:26:57.819654 1372118 out.go:179] * Using the docker driver based on existing profile
	I1027 23:26:57.822419 1372118 start.go:307] selected driver: docker
	I1027 23:26:57.822435 1372118 start.go:928] validating driver "docker" against &{Name:embed-certs-790322 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:26:57.822557 1372118 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:26:57.823249 1372118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:26:57.911780 1372118 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 23:26:57.902033646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:26:57.912102 1372118 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:26:57.912132 1372118 cni.go:84] Creating CNI manager for ""
	I1027 23:26:57.912183 1372118 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:26:57.912218 1372118 start.go:351] cluster config:
	{Name:embed-certs-790322 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:26:57.915350 1372118 out.go:179] * Starting "embed-certs-790322" primary control-plane node in "embed-certs-790322" cluster
	I1027 23:26:57.918215 1372118 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 23:26:57.921146 1372118 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 23:26:57.923980 1372118 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:26:57.924038 1372118 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 23:26:57.924062 1372118 cache.go:59] Caching tarball of preloaded images
	I1027 23:26:57.924148 1372118 preload.go:233] Found /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 23:26:57.924157 1372118 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 23:26:57.924286 1372118 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/config.json ...
	I1027 23:26:57.924490 1372118 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 23:26:57.946720 1372118 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 23:26:57.946741 1372118 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 23:26:57.946755 1372118 cache.go:233] Successfully downloaded all kic artifacts
	I1027 23:26:57.946778 1372118 start.go:360] acquireMachinesLock for embed-certs-790322: {Name:mk0a741ca206e2e37bd9112a34c7fc5ed8359e78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:26:57.946830 1372118 start.go:364] duration metric: took 33.239µs to acquireMachinesLock for "embed-certs-790322"
	I1027 23:26:57.946849 1372118 start.go:96] Skipping create...Using existing machine configuration
	I1027 23:26:57.946854 1372118 fix.go:55] fixHost starting: 
	I1027 23:26:57.947100 1372118 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:26:57.980727 1372118 fix.go:113] recreateIfNeeded on embed-certs-790322: state=Stopped err=<nil>
	W1027 23:26:57.980756 1372118 fix.go:139] unexpected machine state, will restart: <nil>
	I1027 23:26:56.025667 1369496 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 23:26:56.026130 1369496 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 23:26:56.477016 1369496 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 23:26:56.671259 1369496 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 23:26:57.762794 1369496 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 23:26:58.081211 1369496 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 23:26:58.805554 1369496 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 23:26:58.808233 1369496 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 23:26:58.825117 1369496 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 23:26:58.828793 1369496 out.go:252]   - Booting up control plane ...
	I1027 23:26:58.828915 1369496 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 23:26:58.840658 1369496 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 23:26:58.842136 1369496 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 23:26:58.864049 1369496 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 23:26:58.864187 1369496 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 23:26:58.873660 1369496 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 23:26:58.874262 1369496 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 23:26:58.874539 1369496 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 23:26:59.080521 1369496 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 23:26:59.080651 1369496 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 23:27:00.581426 1369496 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501339765s
	I1027 23:27:00.584884 1369496 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 23:27:00.584976 1369496 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1027 23:27:00.585295 1369496 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 23:27:00.585396 1369496 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
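kubeadm's control-plane-check polls three HTTPS endpoints until they report healthy: the apiserver's /livez on the cluster address (port 8444 in this profile) and the controller-manager and scheduler health ports on loopback. A sketch of such a probe, with certificate verification skipped since these components serve cluster-internal certs (acceptable for a liveness poke, not for real traffic):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func probe(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("%s: %s", url, resp.Status)
    	}
    	return nil
    }

    func main() {
    	for _, u := range []string{
    		"https://192.168.76.2:8444/livez", // kube-apiserver
    		"https://127.0.0.1:10257/healthz", // kube-controller-manager
    		"https://127.0.0.1:10259/livez",   // kube-scheduler
    	} {
    		fmt.Println(u, probe(u))
    	}
    }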
	I1027 23:26:57.983904 1372118 out.go:252] * Restarting existing docker container for "embed-certs-790322" ...
	I1027 23:26:57.983987 1372118 cli_runner.go:164] Run: docker start embed-certs-790322
	I1027 23:26:58.327945 1372118 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:26:58.366280 1372118 kic.go:430] container "embed-certs-790322" state is running.
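Between `docker start` and the "state is running" line, the fix path re-inspects the container with the `{{.State.Status}}` template until it reports running. A sketch of that wait loop (the 30s timeout is illustrative, not minikube's actual value):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitRunning polls `docker container inspect` until the container reports
    // state "running" or the deadline passes.
    func waitRunning(name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("docker", "container", "inspect", name,
    			"--format", "{{.State.Status}}").Output()
    		if err == nil && strings.TrimSpace(string(out)) == "running" {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("container %s not running after %v", name, timeout)
    }

    func main() {
    	if err := waitRunning("embed-certs-790322", 30*time.Second); err != nil {
    		panic(err)
    	}
    	fmt.Println("running")
    }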
	I1027 23:26:58.367082 1372118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-790322
	I1027 23:26:58.400611 1372118 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/config.json ...
	I1027 23:26:58.400861 1372118 machine.go:94] provisionDockerMachine start ...
	I1027 23:26:58.400931 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:26:58.426994 1372118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:26:58.427322 1372118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34589 <nil> <nil>}
	I1027 23:26:58.427331 1372118 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:26:58.428275 1372118 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50640->127.0.0.1:34589: read: connection reset by peer
	I1027 23:27:01.622790 1372118 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-790322
	
	I1027 23:27:01.622827 1372118 ubuntu.go:182] provisioning hostname "embed-certs-790322"
	I1027 23:27:01.622918 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:01.668222 1372118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:27:01.668540 1372118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34589 <nil> <nil>}
	I1027 23:27:01.668557 1372118 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-790322 && echo "embed-certs-790322" | sudo tee /etc/hostname
	I1027 23:27:01.880089 1372118 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-790322
	
	I1027 23:27:01.880214 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:01.914678 1372118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:27:01.914993 1372118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34589 <nil> <nil>}
	I1027 23:27:01.915017 1372118 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-790322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-790322/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-790322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:27:02.100016 1372118 main.go:143] libmachine: SSH cmd err, output: <nil>: 
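Everything in this provisioning phase (hostname, /etc/hosts, the cert copies below) runs as one-off commands over the SSH port docker mapped to 127.0.0.1:34589. A stripped-down sketch of one such exchange with golang.org/x/crypto/ssh, assuming the machine key path from the sshutil lines (the real code sits behind libmachine's native SSH client):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a localhost-mapped test node
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:34589", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput("hostname")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s", out) // embed-certs-790322
    }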
	I1027 23:27:02.100086 1372118 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:27:02.100146 1372118 ubuntu.go:190] setting up certificates
	I1027 23:27:02.100174 1372118 provision.go:84] configureAuth start
	I1027 23:27:02.100252 1372118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-790322
	I1027 23:27:02.126984 1372118 provision.go:143] copyHostCerts
	I1027 23:27:02.127050 1372118 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:27:02.127065 1372118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:27:02.127143 1372118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:27:02.127251 1372118 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:27:02.127257 1372118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:27:02.127282 1372118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:27:02.127340 1372118 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:27:02.127344 1372118 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:27:02.127366 1372118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:27:02.127412 1372118 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.embed-certs-790322 san=[127.0.0.1 192.168.85.2 embed-certs-790322 localhost minikube]
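The server cert generated here must carry every name and address a client might use to reach the machine, hence the SAN list [127.0.0.1 192.168.85.2 embed-certs-790322 localhost minikube]. A self-signed crypto/x509 sketch of the same SAN wiring (minikube actually signs with its ca.pem/ca-key.pem rather than self-signing):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	priv, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-790322"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in the cluster config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"embed-certs-790322", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    	}
    	// Self-signed for brevity: the template doubles as its own parent.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &priv.PublicKey, priv)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }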
	I1027 23:27:03.574875 1369496 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.98960737s
	I1027 23:27:02.724924 1372118 provision.go:177] copyRemoteCerts
	I1027 23:27:02.725053 1372118 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:27:02.725125 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:02.742703 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:02.855688 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:27:02.901503 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 23:27:02.931477 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 23:27:02.967998 1372118 provision.go:87] duration metric: took 867.785329ms to configureAuth
	I1027 23:27:02.968070 1372118 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:27:02.968305 1372118 config.go:182] Loaded profile config "embed-certs-790322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:27:02.968463 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:02.996153 1372118 main.go:143] libmachine: Using SSH client type: native
	I1027 23:27:02.996460 1372118 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34589 <nil> <nil>}
	I1027 23:27:02.996478 1372118 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:27:03.467739 1372118 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:27:03.467809 1372118 machine.go:97] duration metric: took 5.066930053s to provisionDockerMachine
	I1027 23:27:03.467856 1372118 start.go:293] postStartSetup for "embed-certs-790322" (driver="docker")
	I1027 23:27:03.467893 1372118 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:27:03.467987 1372118 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:27:03.468071 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:03.493180 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:03.623500 1372118 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:27:03.627633 1372118 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:27:03.627671 1372118 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:27:03.627684 1372118 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:27:03.627749 1372118 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:27:03.627833 1372118 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:27:03.627947 1372118 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:27:03.644048 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:27:03.666091 1372118 start.go:296] duration metric: took 198.192776ms for postStartSetup
	I1027 23:27:03.666182 1372118 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:27:03.666245 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:03.682357 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:03.791652 1372118 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:27:03.798570 1372118 fix.go:57] duration metric: took 5.851708801s for fixHost
	I1027 23:27:03.798605 1372118 start.go:83] releasing machines lock for "embed-certs-790322", held for 5.851767157s
	I1027 23:27:03.798684 1372118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-790322
	I1027 23:27:03.828892 1372118 ssh_runner.go:195] Run: cat /version.json
	I1027 23:27:03.828957 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:03.829216 1372118 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:27:03.829280 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:03.879957 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:03.888974 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:04.102180 1372118 ssh_runner.go:195] Run: systemctl --version
	I1027 23:27:04.115296 1372118 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:27:04.181664 1372118 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:27:04.191270 1372118 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:27:04.191392 1372118 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:27:04.204722 1372118 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 23:27:04.204802 1372118 start.go:496] detecting cgroup driver to use...
	I1027 23:27:04.204849 1372118 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 23:27:04.204926 1372118 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:27:04.220880 1372118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:27:04.240791 1372118 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:27:04.240899 1372118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:27:04.258648 1372118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:27:04.286284 1372118 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:27:04.454855 1372118 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:27:04.644920 1372118 docker.go:234] disabling docker service ...
	I1027 23:27:04.645058 1372118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:27:04.660850 1372118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:27:04.675695 1372118 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:27:04.868099 1372118 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:27:05.063828 1372118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:27:05.082647 1372118 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:27:05.107749 1372118 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:27:05.107822 1372118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.121233 1372118 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:27:05.121307 1372118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.143748 1372118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.160586 1372118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.179086 1372118 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:27:05.191735 1372118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.207415 1372118 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.218949 1372118 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:27:05.235732 1372118 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:27:05.248461 1372118 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:27:05.264882 1372118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:27:05.462697 1372118 ssh_runner.go:195] Run: sudo systemctl restart crio
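Net effect of the sed runs above: /etc/crio/crio.conf.d/02-crio.conf ends up with `pause_image = "registry.k8s.io/pause:3.10.1"`, `cgroup_manager = "cgroupfs"`, `conmon_cgroup = "pod"`, and a `default_sysctls` list carrying `"net.ipv4.ip_unprivileged_port_start=0"`; the daemon-reload plus `systemctl restart crio` is what makes CRI-O pick the drop-in up.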
	I1027 23:27:05.711167 1372118 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:27:05.711239 1372118 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:27:05.715341 1372118 start.go:564] Will wait 60s for crictl version
	I1027 23:27:05.715407 1372118 ssh_runner.go:195] Run: which crictl
	I1027 23:27:05.718946 1372118 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:27:05.766824 1372118 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 23:27:05.766910 1372118 ssh_runner.go:195] Run: crio --version
	I1027 23:27:05.820172 1372118 ssh_runner.go:195] Run: crio --version
	I1027 23:27:05.871373 1372118 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 23:27:05.874464 1372118 cli_runner.go:164] Run: docker network inspect embed-certs-790322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:27:05.904076 1372118 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 23:27:05.908444 1372118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:27:05.923731 1372118 kubeadm.go:884] updating cluster {Name:embed-certs-790322 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:27:05.923843 1372118 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:27:05.923904 1372118 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:27:06.009813 1372118 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:27:06.009911 1372118 crio.go:433] Images already preloaded, skipping extraction
	I1027 23:27:06.010028 1372118 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:27:06.059961 1372118 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:27:06.059987 1372118 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:27:06.059996 1372118 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 23:27:06.060099 1372118 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-790322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
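The empty `ExecStart=` followed by a full `ExecStart=` line in the unit text above is the standard systemd drop-in idiom: a drop-in must first clear the unit's existing ExecStart before it can substitute its own command line, which is exactly what the 368-byte 10-kubeadm.conf written below does to the stock kubelet.service.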
	I1027 23:27:06.060192 1372118 ssh_runner.go:195] Run: crio config
	I1027 23:27:06.181535 1372118 cni.go:84] Creating CNI manager for ""
	I1027 23:27:06.181558 1372118 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:27:06.181577 1372118 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 23:27:06.181600 1372118 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-790322 NodeName:embed-certs-790322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:27:06.181732 1372118 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-790322"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 23:27:06.181812 1372118 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:27:06.192912 1372118 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:27:06.192995 1372118 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:27:06.203308 1372118 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1027 23:27:06.218584 1372118 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:27:06.232422 1372118 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1027 23:27:06.247296 1372118 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:27:06.251492 1372118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
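The one-liner above keeps /etc/hosts idempotent: it strips any stale control-plane.minikube.internal entry before appending the current mapping, so repeated starts never accumulate duplicates. Expanded for readability (same semantics; /tmp/hosts.new is an illustrative temp path standing in for the PID-based one):

	# rebuild /etc/hosts without the old control-plane entry, then append the new one
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  echo $'192.168.85.2\tcontrol-plane.minikube.internal'
	} > /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts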
	I1027 23:27:06.261925 1372118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:27:06.457092 1372118 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:27:06.478856 1372118 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322 for IP: 192.168.85.2
	I1027 23:27:06.478875 1372118 certs.go:195] generating shared ca certs ...
	I1027 23:27:06.478891 1372118 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:27:06.479031 1372118 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:27:06.479080 1372118 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:27:06.479090 1372118 certs.go:257] generating profile certs ...
	I1027 23:27:06.479179 1372118 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/client.key
	I1027 23:27:06.479248 1372118 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/apiserver.key.f07237cc
	I1027 23:27:06.479292 1372118 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/proxy-client.key
	I1027 23:27:06.479402 1372118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:27:06.479436 1372118 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:27:06.479448 1372118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:27:06.479471 1372118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:27:06.479496 1372118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:27:06.479722 1372118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:27:06.479825 1372118 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:27:06.480838 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:27:06.546023 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:27:06.590814 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:27:06.650028 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:27:06.677604 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1027 23:27:06.733526 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 23:27:06.770512 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:27:06.794546 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/embed-certs-790322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 23:27:06.817673 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:27:06.845792 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:27:06.874996 1372118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:27:06.907763 1372118 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:27:06.939835 1372118 ssh_runner.go:195] Run: openssl version
	I1027 23:27:06.947898 1372118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:27:06.961316 1372118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:27:06.967846 1372118 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:27:06.967971 1372118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:27:07.018751 1372118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:27:07.027283 1372118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:27:07.035876 1372118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:27:07.040843 1372118 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:27:07.040991 1372118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:27:07.085555 1372118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 23:27:07.094489 1372118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:27:07.103537 1372118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:27:07.108009 1372118 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:27:07.108154 1372118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:27:07.150730 1372118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
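The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's standard CA-lookup convention: each trusted certificate is made reachable at /etc/ssl/certs/&lt;subject-hash&gt;.0, which is where TLS clients look first. A sketch of the same step for one certificate (producing the b5213941.0 link seen above):

	# compute the subject hash and create the lookup symlink
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"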
	I1027 23:27:07.160134 1372118 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:27:07.164988 1372118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 23:27:07.214638 1372118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 23:27:07.268298 1372118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 23:27:07.344572 1372118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 23:27:07.414155 1372118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 23:27:07.508607 1372118 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
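Each `-checkend 86400` invocation above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that exit status is what decides whether control-plane certs need regeneration. The check in isolation:

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "cert valid for at least 24h"
	else
	  echo "cert expires within 24h"   # the caller treats this as needing renewal
	fi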
	I1027 23:27:07.566964 1372118 kubeadm.go:401] StartCluster: {Name:embed-certs-790322 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-790322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:27:07.567056 1372118 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:27:07.567131 1372118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:27:07.721596 1372118 cri.go:89] found id: "2dd33085839f4b3ec48e1cee1be0d27c1b29b3ebaf8e0437c48d7c3fc9c0602c"
	I1027 23:27:07.721621 1372118 cri.go:89] found id: "04d779de2ba59c56b41e444a5f41bcb57f87bfbcebe9ef9955704cdc0d568248"
	I1027 23:27:07.721626 1372118 cri.go:89] found id: "4cca3101ea45339f788b56e37456e84838b100b57b1522533eaa76028f279109"
	I1027 23:27:07.721636 1372118 cri.go:89] found id: ""
	I1027 23:27:07.721689 1372118 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 23:27:07.809334 1372118 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:27:07Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:27:07.809421 1372118 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:27:07.830014 1372118 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 23:27:07.830034 1372118 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 23:27:07.830105 1372118 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 23:27:07.845122 1372118 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 23:27:07.845557 1372118 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-790322" does not appear in /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:27:07.845661 1372118 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-1132878/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-790322" cluster setting kubeconfig missing "embed-certs-790322" context setting]
	I1027 23:27:07.845942 1372118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:27:07.847319 1372118 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 23:27:07.868638 1372118 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 23:27:07.868673 1372118 kubeadm.go:602] duration metric: took 38.632535ms to restartPrimaryControlPlane
	I1027 23:27:07.868682 1372118 kubeadm.go:403] duration metric: took 301.730067ms to StartCluster
	I1027 23:27:07.868697 1372118 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:27:07.868756 1372118 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:27:07.869767 1372118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:27:07.869989 1372118 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:27:07.870257 1372118 config.go:182] Loaded profile config "embed-certs-790322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:27:07.870306 1372118 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:27:07.870424 1372118 addons.go:69] Setting dashboard=true in profile "embed-certs-790322"
	I1027 23:27:07.870374 1372118 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-790322"
	I1027 23:27:07.870449 1372118 addons.go:238] Setting addon dashboard=true in "embed-certs-790322"
	I1027 23:27:07.870456 1372118 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-790322"
	W1027 23:27:07.870457 1372118 addons.go:247] addon dashboard should already be in state true
	W1027 23:27:07.870462 1372118 addons.go:247] addon storage-provisioner should already be in state true
	I1027 23:27:07.870482 1372118 host.go:66] Checking if "embed-certs-790322" exists ...
	I1027 23:27:07.870485 1372118 host.go:66] Checking if "embed-certs-790322" exists ...
	I1027 23:27:07.870932 1372118 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:27:07.870947 1372118 addons.go:69] Setting default-storageclass=true in profile "embed-certs-790322"
	I1027 23:27:07.870960 1372118 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-790322"
	I1027 23:27:07.871199 1372118 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:27:07.870934 1372118 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:27:07.874327 1372118 out.go:179] * Verifying Kubernetes components...
	I1027 23:27:07.877483 1372118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:27:07.921642 1372118 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:27:07.923871 1372118 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:27:07.923902 1372118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:27:07.923973 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:07.926608 1372118 addons.go:238] Setting addon default-storageclass=true in "embed-certs-790322"
	W1027 23:27:07.926636 1372118 addons.go:247] addon default-storageclass should already be in state true
	I1027 23:27:07.926662 1372118 host.go:66] Checking if "embed-certs-790322" exists ...
	I1027 23:27:07.927094 1372118 cli_runner.go:164] Run: docker container inspect embed-certs-790322 --format={{.State.Status}}
	I1027 23:27:07.930680 1372118 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 23:27:07.934972 1372118 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 23:27:07.589168 1369496 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.003295676s
	I1027 23:27:08.586654 1369496 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.00161344s
	I1027 23:27:08.617820 1369496 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 23:27:08.651361 1369496 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 23:27:08.672815 1369496 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 23:27:08.673024 1369496 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-336451 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 23:27:08.695558 1369496 kubeadm.go:319] [bootstrap-token] Using token: j9lm8r.7dur7mpnl819twae
	I1027 23:27:08.698544 1369496 out.go:252]   - Configuring RBAC rules ...
	I1027 23:27:08.698661 1369496 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 23:27:08.705744 1369496 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 23:27:08.723693 1369496 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 23:27:08.731147 1369496 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 23:27:08.736342 1369496 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 23:27:08.745908 1369496 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 23:27:09.017778 1369496 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 23:27:09.574635 1369496 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 23:27:09.998756 1369496 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 23:27:10.000172 1369496 kubeadm.go:319] 
	I1027 23:27:10.000265 1369496 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 23:27:10.000277 1369496 kubeadm.go:319] 
	I1027 23:27:10.000361 1369496 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 23:27:10.000371 1369496 kubeadm.go:319] 
	I1027 23:27:10.000398 1369496 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 23:27:10.000892 1369496 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 23:27:10.000961 1369496 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 23:27:10.000971 1369496 kubeadm.go:319] 
	I1027 23:27:10.001030 1369496 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 23:27:10.001039 1369496 kubeadm.go:319] 
	I1027 23:27:10.001091 1369496 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 23:27:10.001099 1369496 kubeadm.go:319] 
	I1027 23:27:10.001163 1369496 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 23:27:10.001249 1369496 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 23:27:10.001327 1369496 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 23:27:10.001335 1369496 kubeadm.go:319] 
	I1027 23:27:10.001629 1369496 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 23:27:10.001721 1369496 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 23:27:10.001731 1369496 kubeadm.go:319] 
	I1027 23:27:10.002145 1369496 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token j9lm8r.7dur7mpnl819twae \
	I1027 23:27:10.002273 1369496 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 \
	I1027 23:27:10.002492 1369496 kubeadm.go:319] 	--control-plane 
	I1027 23:27:10.002509 1369496 kubeadm.go:319] 
	I1027 23:27:10.002795 1369496 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 23:27:10.002815 1369496 kubeadm.go:319] 
	I1027 23:27:10.003080 1369496 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token j9lm8r.7dur7mpnl819twae \
	I1027 23:27:10.003401 1369496 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 
	I1027 23:27:10.009000 1369496 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 23:27:10.009283 1369496 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 23:27:10.009410 1369496 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 23:27:10.009468 1369496 cni.go:84] Creating CNI manager for ""
	I1027 23:27:10.009482 1369496 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:27:10.013092 1369496 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 23:27:10.016073 1369496 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 23:27:10.032899 1369496 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 23:27:10.032926 1369496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 23:27:10.084560 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
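kindnet is applied as a DaemonSet in kube-system; once the manifest above lands, its rollout can be followed with kubectl (the DaemonSet name kindnet is assumed here from the kindnet-* pod names later in this log):

	kubectl -n kube-system rollout status daemonset/kindnet --timeout=2m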
	I1027 23:27:10.555414 1369496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 23:27:10.555538 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:10.555613 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-336451 minikube.k8s.io/updated_at=2025_10_27T23_27_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=default-k8s-diff-port-336451 minikube.k8s.io/primary=true
	I1027 23:27:07.942570 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 23:27:07.942597 1372118 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 23:27:07.942676 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:07.970507 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:07.977164 1372118 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:27:07.977185 1372118 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:27:07.977247 1372118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-790322
	I1027 23:27:08.010762 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:08.030543 1372118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34589 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/embed-certs-790322/id_rsa Username:docker}
	I1027 23:27:08.342954 1372118 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:27:08.363752 1372118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:27:08.405304 1372118 node_ready.go:35] waiting up to 6m0s for node "embed-certs-790322" to be "Ready" ...
	I1027 23:27:08.479620 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 23:27:08.479646 1372118 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 23:27:08.508486 1372118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:27:08.515674 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 23:27:08.515702 1372118 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 23:27:08.610848 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 23:27:08.610914 1372118 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 23:27:08.743517 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 23:27:08.743586 1372118 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 23:27:08.814050 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 23:27:08.814117 1372118 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 23:27:08.837148 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 23:27:08.837221 1372118 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 23:27:08.859763 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 23:27:08.859839 1372118 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 23:27:08.880028 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 23:27:08.880102 1372118 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 23:27:08.907564 1372118 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 23:27:08.907638 1372118 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 23:27:08.935516 1372118 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
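All ten dashboard manifests staged above are applied in a single kubectl invocation via repeated -f flags. A quick way to confirm the result (the kubernetes-dashboard namespace is assumed from dashboard-ns.yaml's conventional contents):

	kubectl -n kubernetes-dashboard get deploy,svc,pods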
	I1027 23:27:10.876897 1369496 ops.go:34] apiserver oom_adj: -16
	I1027 23:27:10.876997 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:11.377135 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:11.877315 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:12.377098 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:12.877634 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:13.377806 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:13.877368 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:14.378067 1369496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:27:14.628184 1369496 kubeadm.go:1114] duration metric: took 4.072679138s to wait for elevateKubeSystemPrivileges
	I1027 23:27:14.628211 1369496 kubeadm.go:403] duration metric: took 22.864632047s to StartCluster
	I1027 23:27:14.628228 1369496 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:27:14.628287 1369496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:27:14.629803 1369496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:27:14.630050 1369496 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:27:14.630138 1369496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 23:27:14.630441 1369496 config.go:182] Loaded profile config "default-k8s-diff-port-336451": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:27:14.630483 1369496 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:27:14.630541 1369496 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-336451"
	I1027 23:27:14.630555 1369496 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-336451"
	I1027 23:27:14.630575 1369496 host.go:66] Checking if "default-k8s-diff-port-336451" exists ...
	I1027 23:27:14.631062 1369496 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-336451"
	I1027 23:27:14.631080 1369496 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-336451"
	I1027 23:27:14.631353 1369496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-336451 --format={{.State.Status}}
	I1027 23:27:14.631693 1369496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-336451 --format={{.State.Status}}
	I1027 23:27:14.635148 1369496 out.go:179] * Verifying Kubernetes components...
	I1027 23:27:14.638515 1369496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:27:14.668067 1369496 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-336451"
	I1027 23:27:14.668115 1369496 host.go:66] Checking if "default-k8s-diff-port-336451" exists ...
	I1027 23:27:14.668539 1369496 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-336451 --format={{.State.Status}}
	I1027 23:27:14.675228 1369496 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:27:14.680124 1369496 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:27:14.680150 1369496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:27:14.680213 1369496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:27:14.704695 1369496 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:27:14.704721 1369496 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:27:14.704784 1369496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:27:14.731557 1369496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34584 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/default-k8s-diff-port-336451/id_rsa Username:docker}
	I1027 23:27:14.742439 1369496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34584 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/default-k8s-diff-port-336451/id_rsa Username:docker}
	I1027 23:27:15.224704 1369496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:27:15.318545 1369496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:27:15.390982 1369496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
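The sed pipeline above splices two things into the CoreDNS Corefile before replacing the ConfigMap: a bare `log` directive ahead of `errors`, and a `hosts` stanza that resolves host.minikube.internal to the network gateway. Reconstructed from the sed expressions, the injected fragment is:

	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}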
	I1027 23:27:15.391153 1369496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:27:16.939430 1369496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.714694755s)
	I1027 23:27:16.939476 1369496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.620913736s)
	I1027 23:27:16.939769 1369496 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.548578836s)
	I1027 23:27:16.940917 1369496 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-336451" to be "Ready" ...
	I1027 23:27:16.941165 1369496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.550112241s)
	I1027 23:27:16.941180 1369496 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1027 23:27:17.067100 1369496 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 23:27:13.874223 1372118 node_ready.go:49] node "embed-certs-790322" is "Ready"
	I1027 23:27:13.874298 1372118 node_ready.go:38] duration metric: took 5.468960816s for node "embed-certs-790322" to be "Ready" ...
	I1027 23:27:13.874327 1372118 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:27:13.874432 1372118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:27:17.240012 1372118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.876173866s)
	I1027 23:27:17.240079 1372118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.731569168s)
	I1027 23:27:17.240439 1372118 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.304837211s)
	I1027 23:27:17.241092 1372118 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.366626363s)
	I1027 23:27:17.241118 1372118 api_server.go:72] duration metric: took 9.371098403s to wait for apiserver process to appear ...
	I1027 23:27:17.241124 1372118 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:27:17.241138 1372118 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 23:27:17.243741 1372118 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-790322 addons enable metrics-server
	
	I1027 23:27:17.256320 1372118 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
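The same probe can be issued by hand; /healthz is served anonymously over the apiserver's cluster-CA-signed TLS, hence the insecure flag in this sketch:

	curl -sk https://192.168.85.2:8443/healthz   # prints "ok" on a healthy apiserver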
	I1027 23:27:17.257988 1372118 api_server.go:141] control plane version: v1.34.1
	I1027 23:27:17.258012 1372118 api_server.go:131] duration metric: took 16.88182ms to wait for apiserver health ...
	I1027 23:27:17.258022 1372118 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:27:17.262230 1372118 system_pods.go:59] 8 kube-system pods found
	I1027 23:27:17.262268 1372118 system_pods.go:61] "coredns-66bc5c9577-7czsv" [2949488f-bf74-4218-b480-955908b58ac0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:27:17.262278 1372118 system_pods.go:61] "etcd-embed-certs-790322" [592926b2-df2b-407d-8c86-931a4162bdd6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:27:17.262284 1372118 system_pods.go:61] "kindnet-l2rcj" [c50bbe3e-12b4-4007-aa20-dfd1b04d38aa] Running
	I1027 23:27:17.262291 1372118 system_pods.go:61] "kube-apiserver-embed-certs-790322" [3839b875-fa30-4534-b042-37b5493241ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:27:17.262299 1372118 system_pods.go:61] "kube-controller-manager-embed-certs-790322" [ebf1417a-4c48-4950-9e6b-85d4856dc0c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:27:17.262304 1372118 system_pods.go:61] "kube-proxy-7lwt5" [5d8f2c0d-30b5-487c-9d9e-e7be86b3be39] Running
	I1027 23:27:17.262312 1372118 system_pods.go:61] "kube-scheduler-embed-certs-790322" [cd6b90e4-d691-4163-815e-56ff72e4ba2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:27:17.262325 1372118 system_pods.go:61] "storage-provisioner" [2d42c557-cbb9-445c-8bd8-7b481a959c11] Running
	I1027 23:27:17.262331 1372118 system_pods.go:74] duration metric: took 4.302994ms to wait for pod list to return data ...
	I1027 23:27:17.262339 1372118 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:27:17.264424 1372118 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 23:27:17.265670 1372118 default_sa.go:45] found service account: "default"
	I1027 23:27:17.265691 1372118 default_sa.go:55] duration metric: took 3.341528ms for default service account to be created ...
	I1027 23:27:17.265700 1372118 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:27:17.267823 1372118 addons.go:514] duration metric: took 9.397513282s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 23:27:17.269731 1372118 system_pods.go:86] 8 kube-system pods found
	I1027 23:27:17.269763 1372118 system_pods.go:89] "coredns-66bc5c9577-7czsv" [2949488f-bf74-4218-b480-955908b58ac0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:27:17.269773 1372118 system_pods.go:89] "etcd-embed-certs-790322" [592926b2-df2b-407d-8c86-931a4162bdd6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:27:17.269807 1372118 system_pods.go:89] "kindnet-l2rcj" [c50bbe3e-12b4-4007-aa20-dfd1b04d38aa] Running
	I1027 23:27:17.269816 1372118 system_pods.go:89] "kube-apiserver-embed-certs-790322" [3839b875-fa30-4534-b042-37b5493241ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:27:17.269827 1372118 system_pods.go:89] "kube-controller-manager-embed-certs-790322" [ebf1417a-4c48-4950-9e6b-85d4856dc0c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:27:17.269833 1372118 system_pods.go:89] "kube-proxy-7lwt5" [5d8f2c0d-30b5-487c-9d9e-e7be86b3be39] Running
	I1027 23:27:17.269839 1372118 system_pods.go:89] "kube-scheduler-embed-certs-790322" [cd6b90e4-d691-4163-815e-56ff72e4ba2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:27:17.269844 1372118 system_pods.go:89] "storage-provisioner" [2d42c557-cbb9-445c-8bd8-7b481a959c11] Running
	I1027 23:27:17.269854 1372118 system_pods.go:126] duration metric: took 4.147832ms to wait for k8s-apps to be running ...
	I1027 23:27:17.269890 1372118 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:27:17.269953 1372118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:27:17.285105 1372118 system_svc.go:56] duration metric: took 15.215681ms WaitForService to wait for kubelet
	I1027 23:27:17.285132 1372118 kubeadm.go:587] duration metric: took 9.415111469s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:27:17.285152 1372118 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:27:17.288591 1372118 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:27:17.288620 1372118 node_conditions.go:123] node cpu capacity is 2
	I1027 23:27:17.288631 1372118 node_conditions.go:105] duration metric: took 3.474913ms to run NodePressure ...
	I1027 23:27:17.288644 1372118 start.go:242] waiting for startup goroutines ...
	I1027 23:27:17.288651 1372118 start.go:247] waiting for cluster config update ...
	I1027 23:27:17.288662 1372118 start.go:256] writing updated cluster config ...
	I1027 23:27:17.288954 1372118 ssh_runner.go:195] Run: rm -f paused
	I1027 23:27:17.293358 1372118 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:27:17.297645 1372118 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7czsv" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:17.069995 1369496 addons.go:514] duration metric: took 2.43947725s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 23:27:17.445817 1369496 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-336451" context rescaled to 1 replicas
	W1027 23:27:18.944917 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:19.303525 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:21.303757 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:20.944970 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:23.444340 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:25.444545 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:23.303865 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:25.305363 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:27.944636 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:29.945351 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:27.802993 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:29.805442 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:32.303094 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:31.945833 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:34.443546 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:34.303156 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:36.303987 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:36.444401 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:38.945276 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:38.803141 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:40.807249 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:40.946308 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:43.443932 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:45.444057 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:43.304281 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:45.315142 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:47.444601 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:49.944862 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:47.803124 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:49.803899 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:52.302643 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:51.951303 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:54.444066 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	W1027 23:27:54.303440 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	W1027 23:27:56.804763 1372118 pod_ready.go:104] pod "coredns-66bc5c9577-7czsv" is not "Ready", error: <nil>
	I1027 23:27:57.303397 1372118 pod_ready.go:94] pod "coredns-66bc5c9577-7czsv" is "Ready"
	I1027 23:27:57.303428 1372118 pod_ready.go:86] duration metric: took 40.005747477s for pod "coredns-66bc5c9577-7czsv" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.306074 1372118 pod_ready.go:83] waiting for pod "etcd-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.310979 1372118 pod_ready.go:94] pod "etcd-embed-certs-790322" is "Ready"
	I1027 23:27:57.311008 1372118 pod_ready.go:86] duration metric: took 4.906875ms for pod "etcd-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.313335 1372118 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.317784 1372118 pod_ready.go:94] pod "kube-apiserver-embed-certs-790322" is "Ready"
	I1027 23:27:57.317811 1372118 pod_ready.go:86] duration metric: took 4.447226ms for pod "kube-apiserver-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.320275 1372118 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.501919 1372118 pod_ready.go:94] pod "kube-controller-manager-embed-certs-790322" is "Ready"
	I1027 23:27:57.501951 1372118 pod_ready.go:86] duration metric: took 181.642312ms for pod "kube-controller-manager-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:57.702272 1372118 pod_ready.go:83] waiting for pod "kube-proxy-7lwt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.101593 1372118 pod_ready.go:94] pod "kube-proxy-7lwt5" is "Ready"
	I1027 23:27:58.101632 1372118 pod_ready.go:86] duration metric: took 399.333918ms for pod "kube-proxy-7lwt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.302030 1372118 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.702130 1372118 pod_ready.go:94] pod "kube-scheduler-embed-certs-790322" is "Ready"
	I1027 23:27:58.702156 1372118 pod_ready.go:86] duration metric: took 400.098647ms for pod "kube-scheduler-embed-certs-790322" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.702169 1372118 pod_ready.go:40] duration metric: took 41.408773009s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:27:58.771969 1372118 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:27:58.775340 1372118 out.go:179] * Done! kubectl is now configured to use "embed-certs-790322" cluster and "default" namespace by default
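The closing note flags a one-minor-version skew between the host kubectl (1.33.2) and the cluster (1.34.1), which is within Kubernetes' supported kubectl skew of one minor version in either direction. To check it on any cluster:

	kubectl version   # reports both client and server versions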
	W1027 23:27:56.944057 1369496 node_ready.go:57] node "default-k8s-diff-port-336451" has "Ready":"False" status (will retry)
	I1027 23:27:57.453799 1369496 node_ready.go:49] node "default-k8s-diff-port-336451" is "Ready"
	I1027 23:27:57.453832 1369496 node_ready.go:38] duration metric: took 40.512898119s for node "default-k8s-diff-port-336451" to be "Ready" ...
	I1027 23:27:57.453846 1369496 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:27:57.453908 1369496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:27:57.472544 1369496 api_server.go:72] duration metric: took 42.842462718s to wait for apiserver process to appear ...
	I1027 23:27:57.472572 1369496 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:27:57.472601 1369496 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1027 23:27:57.481723 1369496 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1027 23:27:57.482839 1369496 api_server.go:141] control plane version: v1.34.1
	I1027 23:27:57.482868 1369496 api_server.go:131] duration metric: took 10.289376ms to wait for apiserver health ...
	I1027 23:27:57.482876 1369496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:27:57.485982 1369496 system_pods.go:59] 8 kube-system pods found
	I1027 23:27:57.486032 1369496 system_pods.go:61] "coredns-66bc5c9577-lzssb" [cb585899-022a-4a05-b73d-ab4ef8e7119a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:27:57.486041 1369496 system_pods.go:61] "etcd-default-k8s-diff-port-336451" [d2052799-8302-43e4-b2de-1ae7ecc5d073] Running
	I1027 23:27:57.486050 1369496 system_pods.go:61] "kindnet-ht7mm" [972ca641-7980-4167-9478-45795128282d] Running
	I1027 23:27:57.486055 1369496 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-336451" [6c97a839-7855-4ce4-a15e-765781f00b89] Running
	I1027 23:27:57.486060 1369496 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-336451" [45c8bd93-e3d8-416f-9550-55eb28cef602] Running
	I1027 23:27:57.486065 1369496 system_pods.go:61] "kube-proxy-n4vzn" [883449ce-dcf8-47d7-8f93-9fc7612cf7a1] Running
	I1027 23:27:57.486070 1369496 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-336451" [fd388522-944b-4447-a8db-8bfa05f722ea] Running
	I1027 23:27:57.486077 1369496 system_pods.go:61] "storage-provisioner" [376c0c54-0b9b-47ed-a3c0-d74fcdf0c102] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:27:57.486088 1369496 system_pods.go:74] duration metric: took 3.206486ms to wait for pod list to return data ...
	I1027 23:27:57.486097 1369496 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:27:57.488683 1369496 default_sa.go:45] found service account: "default"
	I1027 23:27:57.488755 1369496 default_sa.go:55] duration metric: took 2.651861ms for default service account to be created ...
	I1027 23:27:57.488771 1369496 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:27:57.491648 1369496 system_pods.go:86] 8 kube-system pods found
	I1027 23:27:57.491685 1369496 system_pods.go:89] "coredns-66bc5c9577-lzssb" [cb585899-022a-4a05-b73d-ab4ef8e7119a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:27:57.491692 1369496 system_pods.go:89] "etcd-default-k8s-diff-port-336451" [d2052799-8302-43e4-b2de-1ae7ecc5d073] Running
	I1027 23:27:57.491698 1369496 system_pods.go:89] "kindnet-ht7mm" [972ca641-7980-4167-9478-45795128282d] Running
	I1027 23:27:57.491705 1369496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-336451" [6c97a839-7855-4ce4-a15e-765781f00b89] Running
	I1027 23:27:57.491709 1369496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-336451" [45c8bd93-e3d8-416f-9550-55eb28cef602] Running
	I1027 23:27:57.491714 1369496 system_pods.go:89] "kube-proxy-n4vzn" [883449ce-dcf8-47d7-8f93-9fc7612cf7a1] Running
	I1027 23:27:57.491718 1369496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-336451" [fd388522-944b-4447-a8db-8bfa05f722ea] Running
	I1027 23:27:57.491724 1369496 system_pods.go:89] "storage-provisioner" [376c0c54-0b9b-47ed-a3c0-d74fcdf0c102] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:27:57.491744 1369496 retry.go:31] will retry after 216.8039ms: missing components: kube-dns
	I1027 23:27:57.712499 1369496 system_pods.go:86] 8 kube-system pods found
	I1027 23:27:57.712534 1369496 system_pods.go:89] "coredns-66bc5c9577-lzssb" [cb585899-022a-4a05-b73d-ab4ef8e7119a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:27:57.712541 1369496 system_pods.go:89] "etcd-default-k8s-diff-port-336451" [d2052799-8302-43e4-b2de-1ae7ecc5d073] Running
	I1027 23:27:57.712547 1369496 system_pods.go:89] "kindnet-ht7mm" [972ca641-7980-4167-9478-45795128282d] Running
	I1027 23:27:57.712552 1369496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-336451" [6c97a839-7855-4ce4-a15e-765781f00b89] Running
	I1027 23:27:57.712556 1369496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-336451" [45c8bd93-e3d8-416f-9550-55eb28cef602] Running
	I1027 23:27:57.712569 1369496 system_pods.go:89] "kube-proxy-n4vzn" [883449ce-dcf8-47d7-8f93-9fc7612cf7a1] Running
	I1027 23:27:57.712581 1369496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-336451" [fd388522-944b-4447-a8db-8bfa05f722ea] Running
	I1027 23:27:57.712591 1369496 system_pods.go:89] "storage-provisioner" [376c0c54-0b9b-47ed-a3c0-d74fcdf0c102] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:27:57.712606 1369496 retry.go:31] will retry after 332.328897ms: missing components: kube-dns
	I1027 23:27:58.048510 1369496 system_pods.go:86] 8 kube-system pods found
	I1027 23:27:58.048549 1369496 system_pods.go:89] "coredns-66bc5c9577-lzssb" [cb585899-022a-4a05-b73d-ab4ef8e7119a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:27:58.048555 1369496 system_pods.go:89] "etcd-default-k8s-diff-port-336451" [d2052799-8302-43e4-b2de-1ae7ecc5d073] Running
	I1027 23:27:58.048583 1369496 system_pods.go:89] "kindnet-ht7mm" [972ca641-7980-4167-9478-45795128282d] Running
	I1027 23:27:58.048595 1369496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-336451" [6c97a839-7855-4ce4-a15e-765781f00b89] Running
	I1027 23:27:58.048600 1369496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-336451" [45c8bd93-e3d8-416f-9550-55eb28cef602] Running
	I1027 23:27:58.048605 1369496 system_pods.go:89] "kube-proxy-n4vzn" [883449ce-dcf8-47d7-8f93-9fc7612cf7a1] Running
	I1027 23:27:58.048609 1369496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-336451" [fd388522-944b-4447-a8db-8bfa05f722ea] Running
	I1027 23:27:58.048621 1369496 system_pods.go:89] "storage-provisioner" [376c0c54-0b9b-47ed-a3c0-d74fcdf0c102] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:27:58.048638 1369496 retry.go:31] will retry after 460.922768ms: missing components: kube-dns
	I1027 23:27:58.514497 1369496 system_pods.go:86] 8 kube-system pods found
	I1027 23:27:58.514528 1369496 system_pods.go:89] "coredns-66bc5c9577-lzssb" [cb585899-022a-4a05-b73d-ab4ef8e7119a] Running
	I1027 23:27:58.514536 1369496 system_pods.go:89] "etcd-default-k8s-diff-port-336451" [d2052799-8302-43e4-b2de-1ae7ecc5d073] Running
	I1027 23:27:58.514541 1369496 system_pods.go:89] "kindnet-ht7mm" [972ca641-7980-4167-9478-45795128282d] Running
	I1027 23:27:58.514568 1369496 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-336451" [6c97a839-7855-4ce4-a15e-765781f00b89] Running
	I1027 23:27:58.514583 1369496 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-336451" [45c8bd93-e3d8-416f-9550-55eb28cef602] Running
	I1027 23:27:58.514587 1369496 system_pods.go:89] "kube-proxy-n4vzn" [883449ce-dcf8-47d7-8f93-9fc7612cf7a1] Running
	I1027 23:27:58.514591 1369496 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-336451" [fd388522-944b-4447-a8db-8bfa05f722ea] Running
	I1027 23:27:58.514596 1369496 system_pods.go:89] "storage-provisioner" [376c0c54-0b9b-47ed-a3c0-d74fcdf0c102] Running
	I1027 23:27:58.514604 1369496 system_pods.go:126] duration metric: took 1.025828047s to wait for k8s-apps to be running ...
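
The "will retry after 216.8039ms / 332.328897ms / 460.922768ms" lines above show the k8s-apps wait re-listing kube-system pods with a growing, jittered delay until no expected component is missing. A minimal sketch of that shape; the predicate, backoff constants, and growth factor are illustrative, not the exact values retry.go uses:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntil polls check with a growing, jittered delay until it
// reports no missing components or the timeout elapses.
func retryUntil(timeout time.Duration, check func() []string) error {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for {
		missing := check()
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out; missing components: %v", missing)
		}
		// Add jitter so concurrent waiters do not synchronize.
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: missing components: %v\n", jittered, missing)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow the base delay between attempts
	}
}

func main() {
	tries := 0
	_ = retryUntil(5*time.Second, func() []string {
		tries++
		if tries < 3 {
			return []string{"kube-dns"} // simulate kube-dns becoming ready
		}
		return nil
	})
}
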
	I1027 23:27:58.514615 1369496 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:27:58.514685 1369496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:27:58.527910 1369496 system_svc.go:56] duration metric: took 13.284355ms WaitForService to wait for kubelet
	I1027 23:27:58.527991 1369496 kubeadm.go:587] duration metric: took 43.897912924s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:27:58.528022 1369496 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:27:58.530975 1369496 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:27:58.531012 1369496 node_conditions.go:123] node cpu capacity is 2
	I1027 23:27:58.531026 1369496 node_conditions.go:105] duration metric: took 2.998065ms to run NodePressure ...
	I1027 23:27:58.531040 1369496 start.go:242] waiting for startup goroutines ...
	I1027 23:27:58.531047 1369496 start.go:247] waiting for cluster config update ...
	I1027 23:27:58.531058 1369496 start.go:256] writing updated cluster config ...
	I1027 23:27:58.531349 1369496 ssh_runner.go:195] Run: rm -f paused
	I1027 23:27:58.535071 1369496 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:27:58.540137 1369496 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lzssb" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.544988 1369496 pod_ready.go:94] pod "coredns-66bc5c9577-lzssb" is "Ready"
	I1027 23:27:58.545018 1369496 pod_ready.go:86] duration metric: took 4.849939ms for pod "coredns-66bc5c9577-lzssb" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.547774 1369496 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.560603 1369496 pod_ready.go:94] pod "etcd-default-k8s-diff-port-336451" is "Ready"
	I1027 23:27:58.560631 1369496 pod_ready.go:86] duration metric: took 12.829505ms for pod "etcd-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.563118 1369496 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.567963 1369496 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-336451" is "Ready"
	I1027 23:27:58.567990 1369496 pod_ready.go:86] duration metric: took 4.84856ms for pod "kube-apiserver-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.570520 1369496 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:58.942942 1369496 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-336451" is "Ready"
	I1027 23:27:58.942969 1369496 pod_ready.go:86] duration metric: took 372.417831ms for pod "kube-controller-manager-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:59.142563 1369496 pod_ready.go:83] waiting for pod "kube-proxy-n4vzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:59.540641 1369496 pod_ready.go:94] pod "kube-proxy-n4vzn" is "Ready"
	I1027 23:27:59.540665 1369496 pod_ready.go:86] duration metric: took 398.079189ms for pod "kube-proxy-n4vzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:27:59.741260 1369496 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:28:00.173655 1369496 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-336451" is "Ready"
	I1027 23:28:00.173689 1369496 pod_ready.go:86] duration metric: took 432.399523ms for pod "kube-scheduler-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:28:00.173703 1369496 pod_ready.go:40] duration metric: took 1.638599587s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:28:00.365146 1369496 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:28:00.384228 1369496 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-336451" cluster and "default" namespace by default
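
Each pod_ready.go wait above amounts to polling one pod until its PodReady condition is True, treating a deleted pod as done (hence "Ready or be gone"). A hedged client-go sketch of one such wait; the kubeconfig path and function name are illustrative:

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady blocks until the named pod reports PodReady=True,
// or returns nil early if the pod has been deleted.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // pod is gone; treat as done
		}
		if err != nil {
			return false, nil // transient error; keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	_ = waitPodReady(cs, "kube-system", "coredns-66bc5c9577-lzssb", 4*time.Minute)
}
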
	
	
	==> CRI-O <==
	Oct 27 23:27:57 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:27:57.644165953Z" level=info msg="Created container b6cdeef55eb4578279a470a5d9ef6b31cf0690840c765201b84c33227ccca273: kube-system/coredns-66bc5c9577-lzssb/coredns" id=4d94ac2f-543e-4ce9-8238-ae8f79716f76 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:27:57 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:27:57.645292147Z" level=info msg="Starting container: b6cdeef55eb4578279a470a5d9ef6b31cf0690840c765201b84c33227ccca273" id=76b7821c-d329-4246-92a7-c240b9e9ad76 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:27:57 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:27:57.649274841Z" level=info msg="Started container" PID=1729 containerID=b6cdeef55eb4578279a470a5d9ef6b31cf0690840c765201b84c33227ccca273 description=kube-system/coredns-66bc5c9577-lzssb/coredns id=76b7821c-d329-4246-92a7-c240b9e9ad76 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2723bda0fdf44d079c7512ad5946fa51c3daec998c150a8123abc6c83f8cce49
	Oct 27 23:28:00 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:00.999530449Z" level=info msg="Running pod sandbox: default/busybox/POD" id=6c0f2107-ea4c-4fd8-bcd5-dca88e1c968f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:28:00 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:00.999601474Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:28:01 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:01.005129896Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4adb6e4adc4323656da6f23c71fa0901394d5419144b75304c913f47ed3b1d12 UID:4e6e40f3-3676-46f6-b448-f5622cc908a9 NetNS:/var/run/netns/19a114ec-8671-4d67-9163-dca76335940b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000792e8}] Aliases:map[]}"
	Oct 27 23:28:01 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:01.005340188Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 27 23:28:01 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:01.016869089Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4adb6e4adc4323656da6f23c71fa0901394d5419144b75304c913f47ed3b1d12 UID:4e6e40f3-3676-46f6-b448-f5622cc908a9 NetNS:/var/run/netns/19a114ec-8671-4d67-9163-dca76335940b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000792e8}] Aliases:map[]}"
	Oct 27 23:28:01 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:01.017019876Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 27 23:28:01 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:01.020657203Z" level=info msg="Ran pod sandbox 4adb6e4adc4323656da6f23c71fa0901394d5419144b75304c913f47ed3b1d12 with infra container: default/busybox/POD" id=6c0f2107-ea4c-4fd8-bcd5-dca88e1c968f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:28:01 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:01.02333423Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=77f49525-0577-4133-aa8a-0fbf6f13ecdc name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:28:01 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:01.023476435Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=77f49525-0577-4133-aa8a-0fbf6f13ecdc name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:28:01 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:01.023525248Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=77f49525-0577-4133-aa8a-0fbf6f13ecdc name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:28:01 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:01.025364855Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4ec443a5-de91-45a6-986a-27237be468e9 name=/runtime.v1.ImageService/PullImage
	Oct 27 23:28:01 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:01.027809509Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 27 23:28:03 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:03.179092762Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=4ec443a5-de91-45a6-986a-27237be468e9 name=/runtime.v1.ImageService/PullImage
	Oct 27 23:28:03 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:03.180292664Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=861b2a27-0fa0-4577-b614-d607059b096f name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:28:03 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:03.182600561Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a4954b4d-4fb6-42c8-8d41-bc77dcb54f95 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:28:03 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:03.1881135Z" level=info msg="Creating container: default/busybox/busybox" id=c1266259-11a0-4896-aff3-329f07212666 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:28:03 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:03.188242651Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:28:03 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:03.193033217Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:28:03 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:03.193506922Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:28:03 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:03.210068639Z" level=info msg="Created container c6241e62149928477f2b4e90f5ec56380dd9ed73e47b0fb0085d8d389f5abf76: default/busybox/busybox" id=c1266259-11a0-4896-aff3-329f07212666 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:28:03 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:03.211103846Z" level=info msg="Starting container: c6241e62149928477f2b4e90f5ec56380dd9ed73e47b0fb0085d8d389f5abf76" id=785d6d49-6836-4595-9248-b84c9039e420 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:28:03 default-k8s-diff-port-336451 crio[838]: time="2025-10-27T23:28:03.213837111Z" level=info msg="Started container" PID=1784 containerID=c6241e62149928477f2b4e90f5ec56380dd9ed73e47b0fb0085d8d389f5abf76 description=default/busybox/busybox id=785d6d49-6836-4595-9248-b84c9039e420 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4adb6e4adc4323656da6f23c71fa0901394d5419144b75304c913f47ed3b1d12
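
The CRI-O excerpt above shows the standard CRI flow for the busybox pod: ImageStatus finds the image missing, PullImage fetches it by tag and resolves it to a digest, then CreateContainer/StartContainer run it inside the already-created sandbox. A minimal sketch of the image half of that flow against the CRI gRPC API, using CRI-O's default socket path (error handling trimmed):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default CRI socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	img := runtimeapi.NewImageServiceClient(conn)
	spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}

	// "Checking image status": a nil Image in the response means not found.
	st, err := img.ImageStatus(context.TODO(), &runtimeapi.ImageStatusRequest{Image: spec})
	if err != nil {
		panic(err)
	}
	if st.Image == nil {
		// "Pulling image": the response carries the resolved digest ref.
		pulled, err := img.PullImage(context.TODO(), &runtimeapi.PullImageRequest{Image: spec})
		if err != nil {
			panic(err)
		}
		fmt.Println("Pulled image:", pulled.ImageRef)
	}
}
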
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	c6241e6214992       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago        Running             busybox                   0                   4adb6e4adc432       busybox                                                default
	b6cdeef55eb45       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   2723bda0fdf44       coredns-66bc5c9577-lzssb                               kube-system
	22184550b3669       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   b0b00793e121b       storage-provisioner                                    kube-system
	a0b29e77aac4c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      56 seconds ago       Running             kindnet-cni               0                   79a1dd4d26613       kindnet-ht7mm                                          kube-system
	732956a81aa5e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      56 seconds ago       Running             kube-proxy                0                   82e08be2c5327       kube-proxy-n4vzn                                       kube-system
	3ad6017762730       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   ecb5e2e327aa0       kube-scheduler-default-k8s-diff-port-336451            kube-system
	97593cbd47016       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   c39c289ea3526       etcd-default-k8s-diff-port-336451                      kube-system
	e308de73aae9c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   d42fcf8409862       kube-apiserver-default-k8s-diff-port-336451            kube-system
	308d078792ad2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   6b5f82a41f0dd       kube-controller-manager-default-k8s-diff-port-336451   kube-system
	
	
	==> coredns [b6cdeef55eb4578279a470a5d9ef6b31cf0690840c765201b84c33227ccca273] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59485 - 9659 "HINFO IN 1515192164013127736.7807086105516394593. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014076341s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-336451
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-336451
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=default-k8s-diff-port-336451
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T23_27_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 23:27:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-336451
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 23:28:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 23:28:11 +0000   Mon, 27 Oct 2025 23:27:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 23:28:11 +0000   Mon, 27 Oct 2025 23:27:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 23:28:11 +0000   Mon, 27 Oct 2025 23:27:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 23:28:11 +0000   Mon, 27 Oct 2025 23:27:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-336451
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                b39d5467-41ea-430a-8620-2c79f46d3819
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-lzssb                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     57s
	  kube-system                 etcd-default-k8s-diff-port-336451                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         65s
	  kube-system                 kindnet-ht7mm                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-default-k8s-diff-port-336451             250m (12%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-336451    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-n4vzn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-default-k8s-diff-port-336451             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 56s                kube-proxy       
	  Warning  CgroupV1                 72s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)  kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)  kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 72s)  kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasSufficientPID
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s                kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s                kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s                kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node default-k8s-diff-port-336451 event: Registered Node default-k8s-diff-port-336451 in Controller
	  Normal   NodeReady                15s                kubelet          Node default-k8s-diff-port-336451 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct27 23:02] overlayfs: idmapped layers are currently not supported
	[Oct27 23:03] overlayfs: idmapped layers are currently not supported
	[Oct27 23:04] overlayfs: idmapped layers are currently not supported
	[Oct27 23:06] overlayfs: idmapped layers are currently not supported
	[  +3.129054] overlayfs: idmapped layers are currently not supported
	[Oct27 23:08] overlayfs: idmapped layers are currently not supported
	[Oct27 23:09] overlayfs: idmapped layers are currently not supported
	[  +0.696324] overlayfs: idmapped layers are currently not supported
	[ +42.065460] overlayfs: idmapped layers are currently not supported
	[Oct27 23:10] overlayfs: idmapped layers are currently not supported
	[ +23.722860] overlayfs: idmapped layers are currently not supported
	[Oct27 23:16] overlayfs: idmapped layers are currently not supported
	[Oct27 23:17] overlayfs: idmapped layers are currently not supported
	[Oct27 23:18] overlayfs: idmapped layers are currently not supported
	[Oct27 23:19] overlayfs: idmapped layers are currently not supported
	[Oct27 23:20] overlayfs: idmapped layers are currently not supported
	[Oct27 23:21] overlayfs: idmapped layers are currently not supported
	[Oct27 23:22] overlayfs: idmapped layers are currently not supported
	[ +34.590925] overlayfs: idmapped layers are currently not supported
	[Oct27 23:23] overlayfs: idmapped layers are currently not supported
	[  +6.906011] overlayfs: idmapped layers are currently not supported
	[Oct27 23:25] overlayfs: idmapped layers are currently not supported
	[  +2.284017] overlayfs: idmapped layers are currently not supported
	[Oct27 23:27] overlayfs: idmapped layers are currently not supported
	[  +6.661421] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [97593cbd470169714ad1ae0d2bd2ed2d4603d6ebfa1ff9cde61d05ee63d8988a] <==
	{"level":"warn","ts":"2025-10-27T23:27:03.711516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:03.731367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:03.760601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:03.774515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:03.854690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:03.856949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:03.879479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:03.906859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:03.943236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:03.979273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:03.993697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:04.009857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:04.047002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:04.079022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:04.101490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:04.120174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:04.136243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:04.162215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:04.176835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:04.205508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:04.236638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:04.262883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:04.284835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:27:04.410703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36758","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T23:27:15.919312Z","caller":"traceutil/trace.go:172","msg":"trace[1238514186] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"116.342312ms","start":"2025-10-27T23:27:15.802931Z","end":"2025-10-27T23:27:15.919273Z","steps":["trace[1238514186] 'process raft request'  (duration: 95.678777ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:28:12 up  6:10,  0 user,  load average: 3.42, 3.97, 3.39
	Linux default-k8s-diff-port-336451 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a0b29e77aac4c33777241c60376f2986ed23df2edd80eff39f1b1d931794fb97] <==
	I1027 23:27:16.526364       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:27:16.526641       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 23:27:16.526769       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:27:16.526781       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:27:16.526794       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:27:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:27:16.724457       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:27:16.732210       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:27:16.732306       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:27:16.732501       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 23:27:46.725461       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 23:27:46.732978       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 23:27:46.733230       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 23:27:46.735494       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1027 23:27:48.332936       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 23:27:48.332970       1 metrics.go:72] Registering metrics
	I1027 23:27:48.333063       1 controller.go:711] "Syncing nftables rules"
	I1027 23:27:56.728026       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 23:27:56.728181       1 main.go:301] handling current node
	I1027 23:28:06.724030       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 23:28:06.724062       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e308de73aae9c66ec80438a3e7417584dddd7ef5fbddb057c7ae211017ea818c] <==
	I1027 23:27:05.933406       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1027 23:27:05.933575       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 23:27:05.933863       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 23:27:06.025878       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:27:06.025935       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 23:27:06.055964       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:27:06.065101       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 23:27:06.065855       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 23:27:06.543541       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 23:27:06.571425       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 23:27:06.571463       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 23:27:08.260833       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 23:27:08.341646       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 23:27:08.426858       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 23:27:08.437164       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1027 23:27:08.438742       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 23:27:08.447128       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 23:27:09.042802       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 23:27:09.540858       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 23:27:09.573139       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 23:27:09.589818       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 23:27:15.289388       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1027 23:27:15.304229       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 23:27:15.395320       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:27:15.506077       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [308d078792ad2d513c69c002811076c4262d1357a49261ec0eef9bc9f2469bab] <==
	I1027 23:27:14.151079       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-336451"
	I1027 23:27:14.151126       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 23:27:14.151163       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 23:27:14.151332       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 23:27:14.151368       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 23:27:14.151388       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 23:27:14.151416       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 23:27:14.153616       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:27:14.160390       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 23:27:14.163683       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 23:27:14.164127       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 23:27:14.177281       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 23:27:14.182518       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-336451" podCIDRs=["10.244.0.0/24"]
	I1027 23:27:14.191416       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 23:27:14.196525       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 23:27:14.196915       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 23:27:14.199679       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 23:27:14.207676       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 23:27:14.208188       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:27:14.208399       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 23:27:14.208592       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:27:14.208609       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 23:27:14.208616       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 23:27:14.266228       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:27:59.158045       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [732956a81aa5e3dd3817ac3406d140f5d678aa3dfdbb378c6453468ab0e23ed6] <==
	I1027 23:27:16.349344       1 server_linux.go:53] "Using iptables proxy"
	I1027 23:27:16.465224       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 23:27:16.566465       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 23:27:16.566496       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 23:27:16.566585       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 23:27:16.643526       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:27:16.643578       1 server_linux.go:132] "Using iptables Proxier"
	I1027 23:27:16.649873       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 23:27:16.650233       1 server.go:527] "Version info" version="v1.34.1"
	I1027 23:27:16.650244       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:27:16.651723       1 config.go:200] "Starting service config controller"
	I1027 23:27:16.651734       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 23:27:16.651750       1 config.go:106] "Starting endpoint slice config controller"
	I1027 23:27:16.657152       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 23:27:16.657192       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 23:27:16.657197       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 23:27:16.657887       1 config.go:309] "Starting node config controller"
	I1027 23:27:16.657895       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 23:27:16.657901       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 23:27:16.755274       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 23:27:16.760806       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 23:27:16.760849       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
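
The paired "Waiting for caches to sync" / "Caches are synced" lines from kube-proxy (and from kindnet above) reflect the standard client-go informer startup handshake: start the informers, then block on WaitForCacheSync before acting on events. A hedged sketch of that pattern, with an illustrative kubeconfig path:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	stop := make(chan struct{})
	defer close(stop)

	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	factory.Start(stop) // begin watching the API server
	fmt.Println("Waiting for caches to sync")
	if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
		panic("failed to sync caches")
	}
	fmt.Println("Caches are synced")
}
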
	
	
	==> kube-scheduler [3ad6017762730506427e7a8a1a5dcaa48d7bc2cf2c25acf55409c4dba792ada5] <==
	I1027 23:27:07.528293       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:27:07.545321       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 23:27:07.545441       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:27:07.545466       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:27:07.545483       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1027 23:27:07.572941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 23:27:07.573039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 23:27:07.573151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 23:27:07.573187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 23:27:07.573225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 23:27:07.573277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 23:27:07.573312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 23:27:07.573373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 23:27:07.589596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 23:27:07.594502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1027 23:27:07.595291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 23:27:07.595386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 23:27:07.595453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 23:27:07.595496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 23:27:07.595554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 23:27:07.595616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 23:27:07.595694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 23:27:07.595759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 23:27:07.595803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1027 23:27:08.745596       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 23:27:10 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:10.242773    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d44af1929f0370b7f766cba625c2162-usr-local-share-ca-certificates\") pod \"kube-apiserver-default-k8s-diff-port-336451\" (UID: \"5d44af1929f0370b7f766cba625c2162\") " pod="kube-system/kube-apiserver-default-k8s-diff-port-336451"
	Oct 27 23:27:10 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:10.750095    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-336451" podStartSLOduration=0.750075144 podStartE2EDuration="750.075144ms" podCreationTimestamp="2025-10-27 23:27:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:27:10.205342145 +0000 UTC m=+0.744972918" watchObservedRunningTime="2025-10-27 23:27:10.750075144 +0000 UTC m=+1.289705893"
	Oct 27 23:27:14 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:14.264652    1303 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 27 23:27:14 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:14.265263    1303 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 23:27:15 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:15.613210    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/883449ce-dcf8-47d7-8f93-9fc7612cf7a1-kube-proxy\") pod \"kube-proxy-n4vzn\" (UID: \"883449ce-dcf8-47d7-8f93-9fc7612cf7a1\") " pod="kube-system/kube-proxy-n4vzn"
	Oct 27 23:27:15 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:15.613269    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqd29\" (UniqueName: \"kubernetes.io/projected/883449ce-dcf8-47d7-8f93-9fc7612cf7a1-kube-api-access-mqd29\") pod \"kube-proxy-n4vzn\" (UID: \"883449ce-dcf8-47d7-8f93-9fc7612cf7a1\") " pod="kube-system/kube-proxy-n4vzn"
	Oct 27 23:27:15 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:15.613294    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/883449ce-dcf8-47d7-8f93-9fc7612cf7a1-lib-modules\") pod \"kube-proxy-n4vzn\" (UID: \"883449ce-dcf8-47d7-8f93-9fc7612cf7a1\") " pod="kube-system/kube-proxy-n4vzn"
	Oct 27 23:27:15 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:15.613327    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/883449ce-dcf8-47d7-8f93-9fc7612cf7a1-xtables-lock\") pod \"kube-proxy-n4vzn\" (UID: \"883449ce-dcf8-47d7-8f93-9fc7612cf7a1\") " pod="kube-system/kube-proxy-n4vzn"
	Oct 27 23:27:15 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:15.810971    1303 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 27 23:27:15 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:15.816224    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/972ca641-7980-4167-9478-45795128282d-lib-modules\") pod \"kindnet-ht7mm\" (UID: \"972ca641-7980-4167-9478-45795128282d\") " pod="kube-system/kindnet-ht7mm"
	Oct 27 23:27:15 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:15.816333    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/972ca641-7980-4167-9478-45795128282d-cni-cfg\") pod \"kindnet-ht7mm\" (UID: \"972ca641-7980-4167-9478-45795128282d\") " pod="kube-system/kindnet-ht7mm"
	Oct 27 23:27:15 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:15.816415    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/972ca641-7980-4167-9478-45795128282d-xtables-lock\") pod \"kindnet-ht7mm\" (UID: \"972ca641-7980-4167-9478-45795128282d\") " pod="kube-system/kindnet-ht7mm"
	Oct 27 23:27:15 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:15.816498    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fztlj\" (UniqueName: \"kubernetes.io/projected/972ca641-7980-4167-9478-45795128282d-kube-api-access-fztlj\") pod \"kindnet-ht7mm\" (UID: \"972ca641-7980-4167-9478-45795128282d\") " pod="kube-system/kindnet-ht7mm"
	Oct 27 23:27:16 default-k8s-diff-port-336451 kubelet[1303]: W1027 23:27:16.287727    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59/crio-79a1dd4d26613c9df168db47e5dc288d644b348436c2d88d803efd2f4ee7a363 WatchSource:0}: Error finding container 79a1dd4d26613c9df168db47e5dc288d644b348436c2d88d803efd2f4ee7a363: Status 404 returned error can't find the container with id 79a1dd4d26613c9df168db47e5dc288d644b348436c2d88d803efd2f4ee7a363
	Oct 27 23:27:17 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:17.123312    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n4vzn" podStartSLOduration=2.1232928 podStartE2EDuration="2.1232928s" podCreationTimestamp="2025-10-27 23:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:27:17.061744365 +0000 UTC m=+7.601375122" watchObservedRunningTime="2025-10-27 23:27:17.1232928 +0000 UTC m=+7.662923548"
	Oct 27 23:27:17 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:17.897777    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-ht7mm" podStartSLOduration=2.8977596930000002 podStartE2EDuration="2.897759693s" podCreationTimestamp="2025-10-27 23:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:27:17.127303449 +0000 UTC m=+7.666934214" watchObservedRunningTime="2025-10-27 23:27:17.897759693 +0000 UTC m=+8.437390441"
	Oct 27 23:27:57 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:57.172579    1303 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 27 23:27:57 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:57.329189    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/376c0c54-0b9b-47ed-a3c0-d74fcdf0c102-tmp\") pod \"storage-provisioner\" (UID: \"376c0c54-0b9b-47ed-a3c0-d74fcdf0c102\") " pod="kube-system/storage-provisioner"
	Oct 27 23:27:57 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:57.329239    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs8hg\" (UniqueName: \"kubernetes.io/projected/376c0c54-0b9b-47ed-a3c0-d74fcdf0c102-kube-api-access-zs8hg\") pod \"storage-provisioner\" (UID: \"376c0c54-0b9b-47ed-a3c0-d74fcdf0c102\") " pod="kube-system/storage-provisioner"
	Oct 27 23:27:57 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:57.329269    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb585899-022a-4a05-b73d-ab4ef8e7119a-config-volume\") pod \"coredns-66bc5c9577-lzssb\" (UID: \"cb585899-022a-4a05-b73d-ab4ef8e7119a\") " pod="kube-system/coredns-66bc5c9577-lzssb"
	Oct 27 23:27:57 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:57.329289    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l4tz\" (UniqueName: \"kubernetes.io/projected/cb585899-022a-4a05-b73d-ab4ef8e7119a-kube-api-access-2l4tz\") pod \"coredns-66bc5c9577-lzssb\" (UID: \"cb585899-022a-4a05-b73d-ab4ef8e7119a\") " pod="kube-system/coredns-66bc5c9577-lzssb"
	Oct 27 23:27:57 default-k8s-diff-port-336451 kubelet[1303]: W1027 23:27:57.538548    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59/crio-b0b00793e121b060c215c322a9d094c41c14c1148e960576535b52f268baa2e3 WatchSource:0}: Error finding container b0b00793e121b060c215c322a9d094c41c14c1148e960576535b52f268baa2e3: Status 404 returned error can't find the container with id b0b00793e121b060c215c322a9d094c41c14c1148e960576535b52f268baa2e3
	Oct 27 23:27:58 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:58.154464    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lzssb" podStartSLOduration=43.154443837 podStartE2EDuration="43.154443837s" podCreationTimestamp="2025-10-27 23:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:27:58.134301605 +0000 UTC m=+48.673932362" watchObservedRunningTime="2025-10-27 23:27:58.154443837 +0000 UTC m=+48.694074594"
	Oct 27 23:27:58 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:27:58.174150    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.174128766 podStartE2EDuration="42.174128766s" podCreationTimestamp="2025-10-27 23:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:27:58.156421768 +0000 UTC m=+48.696052525" watchObservedRunningTime="2025-10-27 23:27:58.174128766 +0000 UTC m=+48.713759514"
	Oct 27 23:28:00 default-k8s-diff-port-336451 kubelet[1303]: I1027 23:28:00.863838    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qv5h\" (UniqueName: \"kubernetes.io/projected/4e6e40f3-3676-46f6-b448-f5622cc908a9-kube-api-access-5qv5h\") pod \"busybox\" (UID: \"4e6e40f3-3676-46f6-b448-f5622cc908a9\") " pod="default/busybox"
	
	
	==> storage-provisioner [22184550b3669232f3c7f564c37981fff4f7569c882c7b8867089d0d8d1a1113] <==
	I1027 23:27:57.624616       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 23:27:57.644511       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 23:27:57.644559       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 23:27:57.654118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:27:57.667063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:27:57.667284       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 23:27:57.669365       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-336451_71ee3567-4a70-4793-b9e0-cff8fa56b203!
	W1027 23:27:57.677827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:27:57.683680       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2176cbc4-0409-4665-84bd-c2de79a00ad7", APIVersion:"v1", ResourceVersion:"464", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-336451_71ee3567-4a70-4793-b9e0-cff8fa56b203 became leader
	W1027 23:27:57.684020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:27:57.770145       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-336451_71ee3567-4a70-4793-b9e0-cff8fa56b203!
	W1027 23:27:59.686916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:27:59.694002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:01.697006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:01.701627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:03.704830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:03.709506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:05.712947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:05.717476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:07.720382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:07.733927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:09.737790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:09.742747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:11.747051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:28:11.752723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
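The noisiest entries in the log above are benign startup races rather than the failure itself: the kube-scheduler's "Failed to watch ... forbidden" burst at 23:27:07 stops once its RBAC bindings propagate ("Caches are synced" about a second later), and the storage-provisioner's repeating "v1 Endpoints is deprecated" warnings come from its Endpoints-based leader-election renewals. One quick way to confirm the scheduler's permissions did converge (a sketch, not part of the test run; only the context name is taken from the logs above):

	kubectl --context default-k8s-diff-port-336451 auth can-i list pods \
	  --as=system:kube-scheduler
	# prints "yes" once the bootstrap RBAC bindings have propagated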
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-336451 -n default-k8s-diff-port-336451
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-336451 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.97s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-852936 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-852936 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (292.507337ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:29:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-852936 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
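The exit status 11 above is minikube's MK_ADDON_ENABLE_PAUSED guard: before enabling an addon it checks whether the cluster is paused by listing runc containers inside the node, and on this cri-o node that check itself fails because the runc state directory was never created. The failing check can be reproduced by hand (a sketch, using the exact command quoted in the stderr above):

	minikube -p newest-cni-852936 ssh -- sudo runc list -f json
	# fails the same way on this node:
	#   level=error msg="open /run/runc: no such file or directory"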
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-852936
helpers_test.go:243: (dbg) docker inspect newest-cni-852936:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb",
	        "Created": "2025-10-27T23:28:26.049254307Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1377523,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T23:28:26.16084967Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb/hostname",
	        "HostsPath": "/var/lib/docker/containers/65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb/hosts",
	        "LogPath": "/var/lib/docker/containers/65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb/65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb-json.log",
	        "Name": "/newest-cni-852936",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-852936:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-852936",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb",
	                "LowerDir": "/var/lib/docker/overlay2/683ddf4845681cbcd053af9f794e7938bfc1ce46288f9101f6ced4d05d48a278-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/683ddf4845681cbcd053af9f794e7938bfc1ce46288f9101f6ced4d05d48a278/merged",
	                "UpperDir": "/var/lib/docker/overlay2/683ddf4845681cbcd053af9f794e7938bfc1ce46288f9101f6ced4d05d48a278/diff",
	                "WorkDir": "/var/lib/docker/overlay2/683ddf4845681cbcd053af9f794e7938bfc1ce46288f9101f6ced4d05d48a278/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-852936",
	                "Source": "/var/lib/docker/volumes/newest-cni-852936/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-852936",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-852936",
	                "name.minikube.sigs.k8s.io": "newest-cni-852936",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e6a04a4d933a05e9aabe8a4fc3801adc9dc5bb75f693839bcb3760d44e65a135",
	            "SandboxKey": "/var/run/docker/netns/e6a04a4d933a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34594"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34595"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34598"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34596"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34597"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-852936": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:2e:f9:9e:6b:2a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1cc9b34231316ca6e2b3bcce7977749e2a63825d24e6f604ea63947f22c91175",
	                    "EndpointID": "ffbd8ae41a3de4bd7c68c25de137eb556c9d551b92531104f625288d05e19b82",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-852936",
	                        "65a8d98d29dc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
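In the inspect output above, the kicbase container publishes the node's service ports (SSH on 22, the Kubernetes apiserver on 8443, and so on) on ephemeral localhost ports. The same mapping can be read without wading through the JSON (a sketch; the container name comes from the inspect above):

	docker port newest-cni-852936 8443
	# 127.0.0.1:34597  (matches NetworkSettings.Ports above)
	docker port newest-cni-852936 22
	# 127.0.0.1:34594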
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-852936 -n newest-cni-852936
E1027 23:29:05.215209 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-852936 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-852936 logs -n 25: (1.107614614s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:26 UTC │
	│ addons  │ enable metrics-server -p no-preload-947754 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │                     │
	│ stop    │ -p no-preload-947754 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ addons  │ enable dashboard -p no-preload-947754 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:25 UTC │
	│ start   │ -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:25 UTC │ 27 Oct 25 23:26 UTC │
	│ image   │ no-preload-947754 image list --format=json                                                                                                                                                                                                    │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ pause   │ -p no-preload-947754 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	│ delete  │ -p no-preload-947754                                                                                                                                                                                                                          │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ delete  │ -p no-preload-947754                                                                                                                                                                                                                          │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ delete  │ -p disable-driver-mounts-247293                                                                                                                                                                                                               │ disable-driver-mounts-247293 │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ start   │ -p default-k8s-diff-port-336451 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:28 UTC │
	│ addons  │ enable metrics-server -p embed-certs-790322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	│ stop    │ -p embed-certs-790322 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-790322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ start   │ -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:27 UTC │
	│ image   │ embed-certs-790322 image list --format=json                                                                                                                                                                                                   │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ pause   │ -p embed-certs-790322 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-336451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-336451 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ delete  │ -p embed-certs-790322                                                                                                                                                                                                                         │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ delete  │ -p embed-certs-790322                                                                                                                                                                                                                         │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ start   │ -p newest-cni-852936 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:29 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-336451 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ start   │ -p default-k8s-diff-port-336451 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-852936 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 23:28:26
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 23:28:26.520692 1377654 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:28:26.520968 1377654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:28:26.521000 1377654 out.go:374] Setting ErrFile to fd 2...
	I1027 23:28:26.521038 1377654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:28:26.521402 1377654 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:28:26.525738 1377654 out.go:368] Setting JSON to false
	I1027 23:28:26.527230 1377654 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22256,"bootTime":1761585451,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 23:28:26.527446 1377654 start.go:143] virtualization:  
	I1027 23:28:26.534667 1377654 out.go:179] * [default-k8s-diff-port-336451] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 23:28:26.539190 1377654 notify.go:221] Checking for updates...
	I1027 23:28:26.540306 1377654 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:28:26.543450 1377654 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:28:26.546511 1377654 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:28:26.550258 1377654 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 23:28:26.554904 1377654 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 23:28:26.558267 1377654 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 23:28:26.563830 1377654 config.go:182] Loaded profile config "default-k8s-diff-port-336451": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:28:26.564383 1377654 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:28:26.657172 1377654 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 23:28:26.657369 1377654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:28:26.810946 1377654 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-27 23:28:26.79728579 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:28:26.811045 1377654 docker.go:318] overlay module found
	I1027 23:28:26.814697 1377654 out.go:179] * Using the docker driver based on existing profile
	I1027 23:28:26.819957 1377654 start.go:307] selected driver: docker
	I1027 23:28:26.819982 1377654 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-336451 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-336451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:28:26.820084 1377654 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:28:26.820717 1377654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:28:27.017998 1377654 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-27 23:28:27.000924372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:28:27.018342 1377654 start_flags.go:991] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:28:27.018367 1377654 cni.go:84] Creating CNI manager for ""
	I1027 23:28:27.019586 1377654 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:28:27.019658 1377654 start.go:351] cluster config:
	{Name:default-k8s-diff-port-336451 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-336451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:28:27.022949 1377654 out.go:179] * Starting "default-k8s-diff-port-336451" primary control-plane node in "default-k8s-diff-port-336451" cluster
	I1027 23:28:27.025704 1377654 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 23:28:27.028707 1377654 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 23:28:27.031672 1377654 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:28:27.031720 1377654 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 23:28:27.031759 1377654 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 23:28:27.031888 1377654 cache.go:59] Caching tarball of preloaded images
	I1027 23:28:27.031977 1377654 preload.go:233] Found /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 23:28:27.031985 1377654 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 23:28:27.032095 1377654 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/default-k8s-diff-port-336451/config.json ...
	I1027 23:28:27.083668 1377654 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 23:28:27.083688 1377654 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 23:28:27.083709 1377654 cache.go:233] Successfully downloaded all kic artifacts
	I1027 23:28:27.083732 1377654 start.go:360] acquireMachinesLock for default-k8s-diff-port-336451: {Name:mkecd163bf05ad01d249b2c36cade7dcbe62d611 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:28:27.083782 1377654 start.go:364] duration metric: took 32.887µs to acquireMachinesLock for "default-k8s-diff-port-336451"
	I1027 23:28:27.083802 1377654 start.go:96] Skipping create...Using existing machine configuration
	I1027 23:28:27.083807 1377654 fix.go:55] fixHost starting: 
	I1027 23:28:27.084070 1377654 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-336451 --format={{.State.Status}}
	I1027 23:28:27.127433 1377654 fix.go:113] recreateIfNeeded on default-k8s-diff-port-336451: state=Stopped err=<nil>
	W1027 23:28:27.127470 1377654 fix.go:139] unexpected machine state, will restart: <nil>
	I1027 23:28:25.962329 1377042 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-852936:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.403155824s)
	I1027 23:28:25.962360 1377042 kic.go:203] duration metric: took 4.403294452s to extract preloaded images to volume ...
	W1027 23:28:25.962732 1377042 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 23:28:25.962838 1377042 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 23:28:26.027054 1377042 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-852936 --name newest-cni-852936 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-852936 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-852936 --network newest-cni-852936 --ip 192.168.85.2 --volume newest-cni-852936:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 23:28:26.468385 1377042 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Running}}
	I1027 23:28:26.514144 1377042 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:28:26.551196 1377042 cli_runner.go:164] Run: docker exec newest-cni-852936 stat /var/lib/dpkg/alternatives/iptables
	I1027 23:28:26.612956 1377042 oci.go:144] the created container "newest-cni-852936" has a running status.
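	
	Note that every --publish=127.0.0.1::PORT flag in the docker run above binds an ephemeral, loopback-only host port; minikube recovers the chosen ports afterwards via the container-inspect HostPort template seen below. Illustratively (the 22/tcp mapping matches the SSH dials later in this log):
	
	$ docker port newest-cni-852936 22/tcp
	127.0.0.1:34594
	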
	I1027 23:28:26.612998 1377042 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa...
	I1027 23:28:27.296425 1377042 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 23:28:27.320731 1377042 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:28:27.342515 1377042 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 23:28:27.342535 1377042 kic_runner.go:114] Args: [docker exec --privileged newest-cni-852936 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 23:28:27.408551 1377042 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:28:27.428637 1377042 machine.go:94] provisionDockerMachine start ...
	I1027 23:28:27.428748 1377042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:28:27.452050 1377042 main.go:143] libmachine: Using SSH client type: native
	I1027 23:28:27.452379 1377042 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34594 <nil> <nil>}
	I1027 23:28:27.452389 1377042 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:28:27.453163 1377042 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48536->127.0.0.1:34594: read: connection reset by peer
	I1027 23:28:30.605985 1377042 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-852936
	
	I1027 23:28:30.606026 1377042 ubuntu.go:182] provisioning hostname "newest-cni-852936"
	I1027 23:28:30.606100 1377042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
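	
	"Using SSH client type: native" means minikube dials that forwarded loopback port with Go's SSH library rather than shelling out to ssh(1). A minimal sketch of the dial-and-run step (assuming golang.org/x/crypto/ssh; the port and key path are the ones from this log):
	
	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		// Key created for the kic container earlier in the log.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User: "docker",
			Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
			// Loopback-only endpoint, so host key pinning is skipped.
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:34594", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out)) // expected: newest-cni-852936
	}
	
	The transient "connection reset by peer" above is the usual race against the container's sshd coming up; the dial is simply retried until it succeeds a few seconds later.
	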
	I1027 23:28:27.130869 1377654 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-336451" ...
	I1027 23:28:27.130961 1377654 cli_runner.go:164] Run: docker start default-k8s-diff-port-336451
	I1027 23:28:27.489932 1377654 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-336451 --format={{.State.Status}}
	I1027 23:28:27.512678 1377654 kic.go:430] container "default-k8s-diff-port-336451" state is running.
	I1027 23:28:27.513634 1377654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-336451
	I1027 23:28:27.537091 1377654 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/default-k8s-diff-port-336451/config.json ...
	I1027 23:28:27.537312 1377654 machine.go:94] provisionDockerMachine start ...
	I1027 23:28:27.537391 1377654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:28:27.558880 1377654 main.go:143] libmachine: Using SSH client type: native
	I1027 23:28:27.559207 1377654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34599 <nil> <nil>}
	I1027 23:28:27.559225 1377654 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:28:27.560377 1377654 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1027 23:28:30.729991 1377654 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-336451
	
	I1027 23:28:30.730031 1377654 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-336451"
	I1027 23:28:30.730109 1377654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:28:30.749516 1377654 main.go:143] libmachine: Using SSH client type: native
	I1027 23:28:30.749830 1377654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34599 <nil> <nil>}
	I1027 23:28:30.749847 1377654 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-336451 && echo "default-k8s-diff-port-336451" | sudo tee /etc/hostname
	I1027 23:28:30.922180 1377654 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-336451
	
	I1027 23:28:30.922272 1377654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:28:30.939205 1377654 main.go:143] libmachine: Using SSH client type: native
	I1027 23:28:30.939508 1377654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34599 <nil> <nil>}
	I1027 23:28:30.939530 1377654 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-336451' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-336451/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-336451' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:28:31.103763 1377654 main.go:143] libmachine: SSH cmd err, output: <nil>: 
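	
	The script result above reflects the Debian convention of mapping the machine's own hostname to 127.0.1.1 (leaving 127.0.0.1 for localhost). After the run, /etc/hosts in the container carries a line like:
	
	127.0.1.1 default-k8s-diff-port-336451
	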
	I1027 23:28:31.103792 1377654 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:28:31.103824 1377654 ubuntu.go:190] setting up certificates
	I1027 23:28:31.103834 1377654 provision.go:84] configureAuth start
	I1027 23:28:31.103896 1377654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-336451
	I1027 23:28:31.134962 1377654 provision.go:143] copyHostCerts
	I1027 23:28:31.135038 1377654 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:28:31.135055 1377654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:28:31.135117 1377654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:28:31.135224 1377654 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:28:31.135229 1377654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:28:31.135250 1377654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:28:31.135316 1377654 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:28:31.135321 1377654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:28:31.135340 1377654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:28:31.135394 1377654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-336451 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-336451 localhost minikube]
	I1027 23:28:31.426046 1377654 provision.go:177] copyRemoteCerts
	I1027 23:28:31.426147 1377654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:28:31.426222 1377654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:28:31.447025 1377654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34599 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/default-k8s-diff-port-336451/id_rsa Username:docker}
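	
	The configureAuth step mints a server certificate signed by the local minikube CA, carrying the SAN set logged above (loopback, the machine IP, the profile name, localhost, minikube). A condensed sketch of that kind of SAN-bearing issuance with crypto/x509 (a throwaway CA stands in for ca.pem/ca-key.pem; error checks elided for brevity):
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Stand-in for .minikube/certs/ca.pem and ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		// Server cert with the SAN set from the log:
		// [127.0.0.1 192.168.76.2 default-k8s-diff-port-336451 localhost minikube]
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-336451"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"default-k8s-diff-port-336451", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	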
	I1027 23:28:30.628168 1377042 main.go:143] libmachine: Using SSH client type: native
	I1027 23:28:30.628490 1377042 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34594 <nil> <nil>}
	I1027 23:28:30.628510 1377042 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-852936 && echo "newest-cni-852936" | sudo tee /etc/hostname
	I1027 23:28:30.793374 1377042 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-852936
	
	I1027 23:28:30.793471 1377042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:28:30.814651 1377042 main.go:143] libmachine: Using SSH client type: native
	I1027 23:28:30.814965 1377042 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34594 <nil> <nil>}
	I1027 23:28:30.814989 1377042 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-852936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-852936/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-852936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:28:30.966553 1377042 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 23:28:30.966580 1377042 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:28:30.966606 1377042 ubuntu.go:190] setting up certificates
	I1027 23:28:30.966616 1377042 provision.go:84] configureAuth start
	I1027 23:28:30.966675 1377042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852936
	I1027 23:28:30.989768 1377042 provision.go:143] copyHostCerts
	I1027 23:28:30.989834 1377042 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:28:30.989848 1377042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:28:30.994761 1377042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:28:30.994928 1377042 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:28:30.994943 1377042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:28:30.994982 1377042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:28:30.995051 1377042 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:28:30.995061 1377042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:28:30.995088 1377042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:28:30.995152 1377042 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.newest-cni-852936 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-852936]
	I1027 23:28:31.279368 1377042 provision.go:177] copyRemoteCerts
	I1027 23:28:31.279483 1377042 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:28:31.279540 1377042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:28:31.297543 1377042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34594 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:28:31.402772 1377042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:28:31.422979 1377042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 23:28:31.446960 1377042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 23:28:31.467263 1377042 provision.go:87] duration metric: took 500.623868ms to configureAuth
	I1027 23:28:31.467293 1377042 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:28:31.467485 1377042 config.go:182] Loaded profile config "newest-cni-852936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:28:31.467596 1377042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:28:31.486296 1377042 main.go:143] libmachine: Using SSH client type: native
	I1027 23:28:31.486634 1377042 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34594 <nil> <nil>}
	I1027 23:28:31.486657 1377042 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:28:31.771979 1377042 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:28:31.772007 1377042 machine.go:97] duration metric: took 4.343348535s to provisionDockerMachine
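	
	The /etc/sysconfig/crio.minikube file written just above acts as an environment file for the crio systemd unit in the kicbase image, so the restart picks up --insecure-registry for the 10.96.0.0/12 service CIDR (letting in-cluster registries be reached without TLS). The wiring presumably follows the usual systemd shape (illustrative excerpt; the exact unit shipped in the image may differ):
	
	# crio.service (excerpt, illustrative)
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS
	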
	I1027 23:28:31.772017 1377042 client.go:176] duration metric: took 10.914257973s to LocalClient.Create
	I1027 23:28:31.772032 1377042 start.go:167] duration metric: took 10.914332788s to libmachine.API.Create "newest-cni-852936"
	I1027 23:28:31.772039 1377042 start.go:293] postStartSetup for "newest-cni-852936" (driver="docker")
	I1027 23:28:31.772049 1377042 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:28:31.772131 1377042 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:28:31.772171 1377042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:28:31.789426 1377042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34594 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:28:31.899398 1377042 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:28:31.903377 1377042 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:28:31.903410 1377042 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:28:31.903421 1377042 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:28:31.903477 1377042 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:28:31.903560 1377042 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:28:31.903673 1377042 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:28:31.912057 1377042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:28:31.933367 1377042 start.go:296] duration metric: took 161.313262ms for postStartSetup
	I1027 23:28:31.933727 1377042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852936
	I1027 23:28:31.954024 1377042 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/config.json ...
	I1027 23:28:31.954252 1377042 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:28:31.954305 1377042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:28:31.974568 1377042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34594 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:28:32.091283 1377042 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:28:32.095927 1377042 start.go:128] duration metric: took 11.241826875s to createHost
	I1027 23:28:32.095956 1377042 start.go:83] releasing machines lock for "newest-cni-852936", held for 11.24198618s
	I1027 23:28:32.096029 1377042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852936
	I1027 23:28:32.119047 1377042 ssh_runner.go:195] Run: cat /version.json
	I1027 23:28:32.119106 1377042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:28:32.119357 1377042 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:28:32.119417 1377042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:28:32.156063 1377042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34594 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:28:32.168708 1377042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34594 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:28:32.270215 1377042 ssh_runner.go:195] Run: systemctl --version
	I1027 23:28:32.382100 1377042 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:28:32.428988 1377042 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:28:32.433952 1377042 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:28:32.434034 1377042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:28:32.465936 1377042 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
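	
	The find/mv pass renames stock bridge and podman CNI configs so they cannot shadow the kindnet config installed later; CRI runtimes pick the lexically first config file in /etc/cni/net.d, so any leftover bridge config could win over the one minikube installs. Per the line above, on this image the pass produced:
	
	/etc/cni/net.d/87-podman-bridge.conflist        -> 87-podman-bridge.conflist.mk_disabled
	/etc/cni/net.d/10-crio-bridge.conflist.disabled -> 10-crio-bridge.conflist.disabled.mk_disabled
	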
	I1027 23:28:32.465961 1377042 start.go:496] detecting cgroup driver to use...
	I1027 23:28:32.465992 1377042 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 23:28:32.466059 1377042 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:28:32.488188 1377042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:28:32.501776 1377042 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:28:32.501838 1377042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:28:32.519362 1377042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:28:32.539462 1377042 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:28:32.690092 1377042 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:28:32.869576 1377042 docker.go:234] disabling docker service ...
	I1027 23:28:32.869654 1377042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:28:32.894976 1377042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:28:32.910573 1377042 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:28:33.057706 1377042 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:28:33.230358 1377042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:28:33.255205 1377042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:28:33.270845 1377042 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:28:33.270941 1377042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:28:33.280446 1377042 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:28:33.280569 1377042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:28:33.302910 1377042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:28:33.315668 1377042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:28:33.328975 1377042 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:28:33.341782 1377042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:28:33.351341 1377042 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:28:33.364423 1377042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:28:33.373281 1377042 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:28:33.380737 1377042 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:28:33.387890 1377042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:28:33.549244 1377042 ssh_runner.go:195] Run: sudo systemctl restart crio
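	
	After the sed pipeline above, the drop-in is equivalent to this fragment (keys shown without their enclosing TOML sections, which the sed edits match wherever they appear in /etc/crio/crio.conf.d/02-crio.conf):
	
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	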
	I1027 23:28:33.709282 1377042 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:28:33.709354 1377042 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:28:33.714285 1377042 start.go:564] Will wait 60s for crictl version
	I1027 23:28:33.714362 1377042 ssh_runner.go:195] Run: which crictl
	I1027 23:28:33.719909 1377042 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:28:33.749272 1377042 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 23:28:33.749355 1377042 ssh_runner.go:195] Run: crio --version
	I1027 23:28:33.789806 1377042 ssh_runner.go:195] Run: crio --version
	I1027 23:28:33.829840 1377042 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 23:28:33.832781 1377042 cli_runner.go:164] Run: docker network inspect newest-cni-852936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:28:33.852630 1377042 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 23:28:33.857186 1377042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:28:33.874455 1377042 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
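	
	The docker network inspect template above flattens the profile network into a single JSON object; for this run it resolves to roughly the following (Driver, MTU, and the container-IP formatting are illustrative; the addresses are the ones in this log):
	
	{"Name": "newest-cni-852936", "Driver": "bridge", "Subnet": "192.168.85.0/24", "Gateway": "192.168.85.1", "MTU": 1500, "ContainerIPs": ["192.168.85.2/24",]}
	
	That gateway, 192.168.85.1, is the address the /etc/hosts rewrite above pins to host.minikube.internal inside the container.
	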
	I1027 23:28:31.555288 1377654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:28:31.576828 1377654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1027 23:28:31.597192 1377654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 23:28:31.616712 1377654 provision.go:87] duration metric: took 512.845766ms to configureAuth
	I1027 23:28:31.616738 1377654 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:28:31.616981 1377654 config.go:182] Loaded profile config "default-k8s-diff-port-336451": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:28:31.617137 1377654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:28:31.634688 1377654 main.go:143] libmachine: Using SSH client type: native
	I1027 23:28:31.635005 1377654 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34599 <nil> <nil>}
	I1027 23:28:31.635028 1377654 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:28:31.977821 1377654 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:28:31.977843 1377654 machine.go:97] duration metric: took 4.440521648s to provisionDockerMachine
	I1027 23:28:31.977854 1377654 start.go:293] postStartSetup for "default-k8s-diff-port-336451" (driver="docker")
	I1027 23:28:31.977865 1377654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:28:31.977941 1377654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:28:31.977979 1377654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:28:31.996575 1377654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34599 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/default-k8s-diff-port-336451/id_rsa Username:docker}
	I1027 23:28:32.115204 1377654 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:28:32.119864 1377654 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:28:32.119893 1377654 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:28:32.119904 1377654 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:28:32.119959 1377654 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:28:32.120046 1377654 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:28:32.120152 1377654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:28:32.129010 1377654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:28:32.159048 1377654 start.go:296] duration metric: took 181.1785ms for postStartSetup
	I1027 23:28:32.159139 1377654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:28:32.159181 1377654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:28:32.184793 1377654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34599 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/default-k8s-diff-port-336451/id_rsa Username:docker}
	I1027 23:28:32.295713 1377654 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:28:32.301030 1377654 fix.go:57] duration metric: took 5.217215366s for fixHost
	I1027 23:28:32.301051 1377654 start.go:83] releasing machines lock for "default-k8s-diff-port-336451", held for 5.217261488s
	I1027 23:28:32.301118 1377654 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-336451
	I1027 23:28:32.319307 1377654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:28:32.319463 1377654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:28:32.319520 1377654 ssh_runner.go:195] Run: cat /version.json
	I1027 23:28:32.319554 1377654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:28:32.337553 1377654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34599 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/default-k8s-diff-port-336451/id_rsa Username:docker}
	I1027 23:28:32.362632 1377654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34599 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/default-k8s-diff-port-336451/id_rsa Username:docker}
	I1027 23:28:32.548802 1377654 ssh_runner.go:195] Run: systemctl --version
	I1027 23:28:32.556118 1377654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:28:32.616727 1377654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:28:32.621938 1377654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:28:32.622011 1377654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:28:32.638795 1377654 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 23:28:32.638821 1377654 start.go:496] detecting cgroup driver to use...
	I1027 23:28:32.638852 1377654 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 23:28:32.638920 1377654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:28:32.659472 1377654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:28:32.685046 1377654 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:28:32.685137 1377654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:28:32.707536 1377654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:28:32.723025 1377654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:28:32.871646 1377654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:28:33.052136 1377654 docker.go:234] disabling docker service ...
	I1027 23:28:33.052254 1377654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:28:33.072059 1377654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:28:33.087393 1377654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:28:33.246742 1377654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:28:33.405011 1377654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:28:33.417830 1377654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:28:33.452244 1377654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:28:33.452369 1377654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:28:33.466290 1377654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:28:33.466438 1377654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:28:33.476806 1377654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:28:33.487984 1377654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:28:33.497274 1377654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:28:33.505745 1377654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:28:33.515531 1377654 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:28:33.524207 1377654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:28:33.537369 1377654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:28:33.545839 1377654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:28:33.554674 1377654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:28:33.700760 1377654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 23:28:33.862007 1377654 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:28:33.862085 1377654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:28:33.866141 1377654 start.go:564] Will wait 60s for crictl version
	I1027 23:28:33.866218 1377654 ssh_runner.go:195] Run: which crictl
	I1027 23:28:33.870802 1377654 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:28:33.904299 1377654 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 23:28:33.904379 1377654 ssh_runner.go:195] Run: crio --version
	I1027 23:28:33.945659 1377654 ssh_runner.go:195] Run: crio --version
	I1027 23:28:33.992735 1377654 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 23:28:33.877235 1377042 kubeadm.go:884] updating cluster {Name:newest-cni-852936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:28:33.877373 1377042 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:28:33.877455 1377042 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:28:33.928113 1377042 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:28:33.928139 1377042 crio.go:433] Images already preloaded, skipping extraction
	I1027 23:28:33.928194 1377042 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:28:33.968680 1377042 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:28:33.968707 1377042 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:28:33.968715 1377042 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 23:28:33.968862 1377042 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-852936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
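	
	One systemd detail worth flagging in the drop-in above: the bare "ExecStart=" line clears the ExecStart inherited from the base kubelet.service, so the following line replaces it outright instead of appending a second command (a service unit allows only one ExecStart). On the node, the merged result can be checked with:
	
	$ sudo systemctl cat kubelet
	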
	I1027 23:28:33.968975 1377042 ssh_runner.go:195] Run: crio config
	I1027 23:28:34.044565 1377042 cni.go:84] Creating CNI manager for ""
	I1027 23:28:34.044640 1377042 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:28:34.044674 1377042 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1027 23:28:34.044722 1377042 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-852936 NodeName:newest-cni-852936 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:28:34.044870 1377042 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-852936"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 23:28:34.044964 1377042 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:28:34.057114 1377042 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:28:34.057231 1377042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:28:34.069205 1377042 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 23:28:34.090346 1377042 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:28:34.112577 1377042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1027 23:28:34.129228 1377042 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:28:34.133622 1377042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:28:34.143623 1377042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:28:34.302597 1377042 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:28:34.328215 1377042 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936 for IP: 192.168.85.2
	I1027 23:28:34.328285 1377042 certs.go:195] generating shared ca certs ...
	I1027 23:28:34.328316 1377042 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:28:34.328498 1377042 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:28:34.328577 1377042 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:28:34.328615 1377042 certs.go:257] generating profile certs ...
	I1027 23:28:34.328707 1377042 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/client.key
	I1027 23:28:34.328740 1377042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/client.crt with IP's: []
	I1027 23:28:34.686692 1377042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/client.crt ...
	I1027 23:28:34.686762 1377042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/client.crt: {Name:mk3dc4fe8291393066d59e9309f7ee88f046bb1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:28:34.686981 1377042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/client.key ...
	I1027 23:28:34.687015 1377042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/client.key: {Name:mk1397493a8a124527026d6fd2d96485bf663141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:28:34.687148 1377042 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.key.7d12570b
	I1027 23:28:34.687183 1377042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.crt.7d12570b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1027 23:28:35.060955 1377042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.crt.7d12570b ...
	I1027 23:28:35.060987 1377042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.crt.7d12570b: {Name:mkd2557aceb4c2e262e79dd47a0e9e35811d98b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:28:35.061237 1377042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.key.7d12570b ...
	I1027 23:28:35.061256 1377042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.key.7d12570b: {Name:mk86f7be0a0e062a40ab354b72d1c45e7eb0600f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:28:35.061394 1377042 certs.go:382] copying /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.crt.7d12570b -> /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.crt
	I1027 23:28:35.061516 1377042 certs.go:386] copying /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.key.7d12570b -> /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.key
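	
	The apiserver cert's SAN set includes 10.96.0.1: the first usable address of the ServiceCIDR 10.96.0.0/12, i.e. the ClusterIP of the default "kubernetes" Service through which pods reach the apiserver. A quick sketch of that derivation:
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	// firstServiceIP returns the network address plus one for a service
	// CIDR - the ClusterIP assigned to the "kubernetes" Service.
	func firstServiceIP(cidr string) (net.IP, error) {
		_, n, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		ip := make(net.IP, len(n.IP.To4()))
		copy(ip, n.IP.To4())
		for i := len(ip) - 1; i >= 0; i-- { // add 1 with carry
			ip[i]++
			if ip[i] != 0 {
				break
			}
		}
		return ip, nil
	}
	
	func main() {
		ip, err := firstServiceIP("10.96.0.0/12")
		if err != nil {
			panic(err)
		}
		fmt.Println(ip) // 10.96.0.1
	}
	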
	I1027 23:28:35.061629 1377042 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.key
	I1027 23:28:35.061662 1377042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.crt with IP's: []
	I1027 23:28:35.268256 1377042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.crt ...
	I1027 23:28:35.268290 1377042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.crt: {Name:mk2ad723de33a5361b30c48913c9c32b6cf5bf8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:28:35.271306 1377042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.key ...
	I1027 23:28:35.271341 1377042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.key: {Name:mkad24098c199f7b935e1913f98075cd286691e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:28:35.271627 1377042 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:28:35.271687 1377042 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:28:35.271702 1377042 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:28:35.271730 1377042 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:28:35.271776 1377042 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:28:35.271810 1377042 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:28:35.271886 1377042 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:28:35.272478 1377042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:28:35.301559 1377042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:28:35.344755 1377042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:28:35.411034 1377042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:28:35.445715 1377042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 23:28:35.477961 1377042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 23:28:35.509136 1377042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:28:35.543704 1377042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 23:28:35.578571 1377042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:28:35.607853 1377042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
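	
	Taken together, the scp calls above leave the node with the complete PKI the control plane needs (paths straight from this log; the last two copies additionally seed host trust under /usr/share/ca-certificates):
	
	/var/lib/minikube/certs/ca.crt, ca.key                            # cluster CA
	/var/lib/minikube/certs/proxy-client-ca.crt, proxy-client-ca.key  # front-proxy CA
	/var/lib/minikube/certs/apiserver.crt, apiserver.key              # serving cert (SANs above)
	/var/lib/minikube/certs/proxy-client.crt, proxy-client.key        # aggregator client cert
	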
	I1027 23:28:33.995649 1377654 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-336451 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:28:34.018119 1377654 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 23:28:34.022709 1377654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:28:34.036885 1377654 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-336451 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-336451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:28:34.037021 1377654 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:28:34.037093 1377654 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:28:34.084284 1377654 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:28:34.084368 1377654 crio.go:433] Images already preloaded, skipping extraction
	I1027 23:28:34.084459 1377654 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:28:34.119131 1377654 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:28:34.119152 1377654 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:28:34.119159 1377654 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1027 23:28:34.119257 1377654 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-336451 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-336451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
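
The doubled ExecStart= in the generated unit above is the standard systemd override idiom: an empty ExecStart= in a drop-in clears the command list inherited from the base kubelet.service, so the second ExecStart= replaces the command instead of appending a second one. A generic drop-in showing the pattern (paths as in this run):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
    # systemd only picks the override up after:
    #   sudo systemctl daemon-reload && sudo systemctl restart kubelet

The log follows exactly this shape: it scps the 10-kubeadm.conf drop-in and then runs `sudo systemctl daemon-reload` a few lines below.
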
	I1027 23:28:34.119336 1377654 ssh_runner.go:195] Run: crio config
	I1027 23:28:34.192719 1377654 cni.go:84] Creating CNI manager for ""
	I1027 23:28:34.192744 1377654 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:28:34.192767 1377654 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 23:28:34.192790 1377654 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-336451 NodeName:default-k8s-diff-port-336451 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:28:34.192946 1377654 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-336451"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
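
The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later diffed against the active copy to decide whether reconfiguration is needed. It can also be sanity-checked offline before any init or restart; a sketch, assuming the kubeadm binary minikube staged for v1.34.1 (recent kubeadm releases ship `config validate`):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
    # or rehearse the whole init without changing the node:
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
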
	
	I1027 23:28:34.193035 1377654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:28:34.205390 1377654 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:28:34.205471 1377654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:28:34.220342 1377654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1027 23:28:34.233500 1377654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:28:34.252517 1377654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1027 23:28:34.267066 1377654 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:28:34.271433 1377654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:28:34.281850 1377654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:28:34.481071 1377654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:28:34.499343 1377654 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/default-k8s-diff-port-336451 for IP: 192.168.76.2
	I1027 23:28:34.499367 1377654 certs.go:195] generating shared ca certs ...
	I1027 23:28:34.499389 1377654 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:28:34.499538 1377654 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:28:34.499588 1377654 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:28:34.499600 1377654 certs.go:257] generating profile certs ...
	I1027 23:28:34.499717 1377654 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/default-k8s-diff-port-336451/client.key
	I1027 23:28:34.499807 1377654 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/default-k8s-diff-port-336451/apiserver.key.aeaa334c
	I1027 23:28:34.499868 1377654 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/default-k8s-diff-port-336451/proxy-client.key
	I1027 23:28:34.500022 1377654 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:28:34.500072 1377654 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:28:34.500176 1377654 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:28:34.500241 1377654 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:28:34.500282 1377654 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:28:34.500306 1377654 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:28:34.500372 1377654 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:28:34.501051 1377654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:28:34.530546 1377654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:28:34.572659 1377654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:28:34.698375 1377654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:28:34.755699 1377654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/default-k8s-diff-port-336451/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1027 23:28:34.779925 1377654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/default-k8s-diff-port-336451/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 23:28:34.812068 1377654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/default-k8s-diff-port-336451/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:28:34.835508 1377654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/default-k8s-diff-port-336451/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 23:28:34.861025 1377654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:28:34.883147 1377654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:28:34.903257 1377654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:28:34.923620 1377654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:28:34.938241 1377654 ssh_runner.go:195] Run: openssl version
	I1027 23:28:34.945208 1377654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:28:34.955013 1377654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:28:34.959384 1377654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:28:34.959443 1377654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:28:35.004918 1377654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:28:35.015253 1377654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:28:35.025426 1377654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:28:35.030293 1377654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:28:35.030442 1377654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:28:35.077248 1377654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 23:28:35.087337 1377654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:28:35.098672 1377654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:28:35.118906 1377654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:28:35.119022 1377654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:28:35.189975 1377654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
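
The openssl x509 -hash calls above feed a standard OpenSSL convention: a CA becomes system-trusted by symlinking it into /etc/ssl/certs under its subject hash with a .0 suffix, which is exactly where the names 3ec20f2e.0, b5213941.0, and 51391683.0 in this run come from. The mechanism in isolation:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints the subject hash, e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # OpenSSL resolves CAs via <hash>.0 links
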
	I1027 23:28:35.223456 1377654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:28:35.230429 1377654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 23:28:35.355346 1377654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 23:28:35.427956 1377654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 23:28:35.597127 1377654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 23:28:35.681181 1377654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 23:28:35.805790 1377654 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
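
Each -checkend 86400 probe above asks whether a certificate remains valid for at least another 86400 seconds (24 hours): openssl exits 0 if so and non-zero if the cert would expire inside the window, which is what drives the regenerate-or-keep decision. A sketch:

    CRT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    if openssl x509 -noout -in "$CRT" -checkend 86400; then
        echo "ok: still valid 24h from now"
    else
        echo "expires within 24h - needs regeneration"
    fi
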
	I1027 23:28:35.935234 1377654 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-336451 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-336451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:28:35.935328 1377654 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:28:35.935388 1377654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:28:36.011370 1377654 cri.go:89] found id: "7f66ec5899883992c1749593bfd4630c3ce8244c7e186676fa13e99cb58e4a03"
	I1027 23:28:36.011400 1377654 cri.go:89] found id: "e042d7ccfe395ac64bbfa1b1099e7ff453e4d67df7754503aac635f0f8ba71a8"
	I1027 23:28:36.011404 1377654 cri.go:89] found id: "69c1f90555bd0a08896702d72889b7cbea6dc8f6bf3d24bcc9936a63461f070f"
	I1027 23:28:36.011408 1377654 cri.go:89] found id: "ee6b21c638763f9bea06ed3eb613912563fe107d49320d174cfb911c51258b74"
	I1027 23:28:36.011416 1377654 cri.go:89] found id: ""
	I1027 23:28:36.011467 1377654 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 23:28:36.037097 1377654 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:28:36Z" level=error msg="open /run/runc: no such file or directory"
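
The warning above is harmless: runc keeps its container state under /run/runc by default, and on this node that directory does not exist yet, so the paused-container probe fails and minikube proceeds with the crictl listing instead. Had state been present, the paused IDs could be extracted from the JSON like this (a sketch; jq is assumed to be available, and CRI-O may configure a different runc --root):

    sudo runc --root /run/runc list -f json \
      | jq -r '.[] | select(.status == "paused") | .id'
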
	I1027 23:28:36.037182 1377654 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:28:36.062724 1377654 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 23:28:36.062746 1377654 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 23:28:36.062795 1377654 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 23:28:36.078671 1377654 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 23:28:36.079105 1377654 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-336451" does not appear in /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:28:36.079213 1377654 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-1132878/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-336451" cluster setting kubeconfig missing "default-k8s-diff-port-336451" context setting]
	I1027 23:28:36.079524 1377654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
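
The "needs updating (will repair)" branch adds the missing cluster and context entries to the kubeconfig. A rough kubectl equivalent of that repair, using this run's endpoint (the CA path is a placeholder, not taken from the log):

    KC=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
    kubectl --kubeconfig "$KC" config set-cluster default-k8s-diff-port-336451 \
      --server=https://192.168.76.2:8444 --certificate-authority=/path/to/ca.crt
    kubectl --kubeconfig "$KC" config set-context default-k8s-diff-port-336451 \
      --cluster=default-k8s-diff-port-336451 --user=default-k8s-diff-port-336451
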
	I1027 23:28:36.080957 1377654 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 23:28:36.102807 1377654 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1027 23:28:36.102842 1377654 kubeadm.go:602] duration metric: took 40.090171ms to restartPrimaryControlPlane
	I1027 23:28:36.102851 1377654 kubeadm.go:403] duration metric: took 167.626425ms to StartCluster
	I1027 23:28:36.102865 1377654 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:28:36.102923 1377654 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:28:36.103646 1377654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:28:36.103852 1377654 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:28:36.104170 1377654 config.go:182] Loaded profile config "default-k8s-diff-port-336451": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:28:36.104219 1377654 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:28:36.104348 1377654 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-336451"
	I1027 23:28:36.104367 1377654 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-336451"
	W1027 23:28:36.104374 1377654 addons.go:247] addon storage-provisioner should already be in state true
	I1027 23:28:36.104396 1377654 host.go:66] Checking if "default-k8s-diff-port-336451" exists ...
	I1027 23:28:36.105151 1377654 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-336451 --format={{.State.Status}}
	I1027 23:28:36.105331 1377654 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-336451"
	I1027 23:28:36.105355 1377654 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-336451"
	W1027 23:28:36.105362 1377654 addons.go:247] addon dashboard should already be in state true
	I1027 23:28:36.105387 1377654 host.go:66] Checking if "default-k8s-diff-port-336451" exists ...
	I1027 23:28:36.105634 1377654 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-336451"
	I1027 23:28:36.105651 1377654 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-336451"
	I1027 23:28:36.105893 1377654 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-336451 --format={{.State.Status}}
	I1027 23:28:36.106326 1377654 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-336451 --format={{.State.Status}}
	I1027 23:28:36.110745 1377654 out.go:179] * Verifying Kubernetes components...
	I1027 23:28:36.113828 1377654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:28:36.163394 1377654 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-336451"
	W1027 23:28:36.163417 1377654 addons.go:247] addon default-storageclass should already be in state true
	I1027 23:28:36.163444 1377654 host.go:66] Checking if "default-k8s-diff-port-336451" exists ...
	I1027 23:28:36.163862 1377654 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-336451 --format={{.State.Status}}
	I1027 23:28:36.168913 1377654 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 23:28:36.174477 1377654 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 23:28:36.182530 1377654 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 23:28:36.182566 1377654 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 23:28:36.182635 1377654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:28:36.206502 1377654 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:28:35.647614 1377042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:28:35.680310 1377042 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:28:35.694739 1377042 ssh_runner.go:195] Run: openssl version
	I1027 23:28:35.701682 1377042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:28:35.710227 1377042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:28:35.714479 1377042 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:28:35.714562 1377042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:28:35.770723 1377042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:28:35.779255 1377042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:28:35.788111 1377042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:28:35.792433 1377042 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:28:35.792518 1377042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:28:35.845013 1377042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 23:28:35.853482 1377042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:28:35.861804 1377042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:28:35.866133 1377042 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:28:35.866209 1377042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:28:35.910596 1377042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
	I1027 23:28:35.919116 1377042 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:28:35.923591 1377042 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 23:28:35.923652 1377042 kubeadm.go:401] StartCluster: {Name:newest-cni-852936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:28:35.923734 1377042 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:28:35.923813 1377042 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:28:35.964531 1377042 cri.go:89] found id: ""
	I1027 23:28:35.964618 1377042 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:28:35.974302 1377042 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 23:28:35.982339 1377042 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1027 23:28:35.982456 1377042 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 23:28:35.994729 1377042 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 23:28:35.994752 1377042 kubeadm.go:158] found existing configuration files:
	
	I1027 23:28:35.994814 1377042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 23:28:36.007518 1377042 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 23:28:36.007613 1377042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 23:28:36.018554 1377042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 23:28:36.031209 1377042 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 23:28:36.031289 1377042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 23:28:36.044586 1377042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 23:28:36.055961 1377042 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 23:28:36.056042 1377042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 23:28:36.067184 1377042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 23:28:36.082412 1377042 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 23:28:36.082482 1377042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 23:28:36.097402 1377042 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
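
The long --ignore-preflight-errors list is there because the docker driver boots Kubernetes inside a container, where checks such as SystemVerification, Swap, and the bridge-nf sysctl are expected to fail. The same preflight stage can be exercised on its own before committing to a full init (a sketch):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification
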
	I1027 23:28:36.246194 1377042 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1027 23:28:36.246254 1377042 kubeadm.go:319] [preflight] Running pre-flight checks
	I1027 23:28:36.315368 1377042 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1027 23:28:36.315439 1377042 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 23:28:36.315475 1377042 kubeadm.go:319] OS: Linux
	I1027 23:28:36.315521 1377042 kubeadm.go:319] CGROUPS_CPU: enabled
	I1027 23:28:36.315569 1377042 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1027 23:28:36.315616 1377042 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1027 23:28:36.315664 1377042 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1027 23:28:36.315712 1377042 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1027 23:28:36.315767 1377042 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1027 23:28:36.315813 1377042 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1027 23:28:36.315861 1377042 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1027 23:28:36.315906 1377042 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1027 23:28:36.474636 1377042 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 23:28:36.474750 1377042 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 23:28:36.474846 1377042 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 23:28:36.486867 1377042 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 23:28:36.208078 1377654 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:28:36.208097 1377654 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:28:36.208156 1377654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:28:36.214510 1377654 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:28:36.214535 1377654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:28:36.214599 1377654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:28:36.234585 1377654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34599 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/default-k8s-diff-port-336451/id_rsa Username:docker}
	I1027 23:28:36.260973 1377654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34599 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/default-k8s-diff-port-336451/id_rsa Username:docker}
	I1027 23:28:36.261549 1377654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34599 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/default-k8s-diff-port-336451/id_rsa Username:docker}
	I1027 23:28:36.490123 1377042 out.go:252]   - Generating certificates and keys ...
	I1027 23:28:36.490215 1377042 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1027 23:28:36.490281 1377042 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1027 23:28:37.149383 1377042 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 23:28:37.512845 1377042 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1027 23:28:37.917894 1377042 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1027 23:28:38.106594 1377042 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1027 23:28:38.446071 1377042 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1027 23:28:38.446667 1377042 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-852936] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 23:28:39.108759 1377042 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1027 23:28:39.114991 1377042 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-852936] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 23:28:39.475703 1377042 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 23:28:39.963075 1377042 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
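
The SANs kubeadm reports for the etcd serving and peer certs can be double-checked on the node with openssl (a sketch; the grep just narrows the output to the SAN extension):

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/etcd/server.crt \
      | grep -A1 'Subject Alternative Name'
    # expected to list localhost, newest-cni-852936, 192.168.85.2, 127.0.0.1 and ::1
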
	I1027 23:28:36.528118 1377654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:28:36.628526 1377654 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 23:28:36.628558 1377654 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 23:28:36.740161 1377654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:28:36.756965 1377654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:28:36.766266 1377654 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 23:28:36.766293 1377654 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 23:28:36.867677 1377654 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 23:28:36.867708 1377654 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 23:28:36.950327 1377654 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 23:28:36.950353 1377654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 23:28:37.060191 1377654 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 23:28:37.060216 1377654 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 23:28:37.136281 1377654 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 23:28:37.136307 1377654 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 23:28:37.161124 1377654 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 23:28:37.161150 1377654 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 23:28:37.207807 1377654 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 23:28:37.207832 1377654 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 23:28:37.248271 1377654 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 23:28:37.248295 1377654 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 23:28:37.295817 1377654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 23:28:41.484699 1377042 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1027 23:28:41.485209 1377042 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 23:28:41.641227 1377042 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 23:28:41.999890 1377042 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 23:28:42.800465 1377042 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 23:28:43.083001 1377042 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 23:28:44.650289 1377042 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 23:28:44.651453 1377042 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 23:28:44.654430 1377042 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 23:28:43.046190 1377654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.518034725s)
	I1027 23:28:45.540191 1377654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.799993338s)
	I1027 23:28:45.540308 1377654 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.783307598s)
	I1027 23:28:45.540367 1377654 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-336451" to be "Ready" ...
	I1027 23:28:45.544202 1377654 node_ready.go:49] node "default-k8s-diff-port-336451" is "Ready"
	I1027 23:28:45.544285 1377654 node_ready.go:38] duration metric: took 3.883583ms for node "default-k8s-diff-port-336451" to be "Ready" ...
	I1027 23:28:45.544313 1377654 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:28:45.544406 1377654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:28:45.569544 1377654 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.273681398s)
	I1027 23:28:45.572972 1377654 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-336451 addons enable metrics-server
	
	I1027 23:28:45.576174 1377654 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1027 23:28:44.658424 1377042 out.go:252]   - Booting up control plane ...
	I1027 23:28:44.658541 1377042 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 23:28:44.658635 1377042 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 23:28:44.659712 1377042 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 23:28:44.694793 1377042 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 23:28:44.695386 1377042 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 23:28:44.705672 1377042 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 23:28:44.705781 1377042 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 23:28:44.705828 1377042 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1027 23:28:44.936184 1377042 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 23:28:44.936311 1377042 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 23:28:45.579188 1377654 addons.go:514] duration metric: took 9.474953434s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1027 23:28:45.607891 1377654 api_server.go:72] duration metric: took 9.503999987s to wait for apiserver process to appear ...
	I1027 23:28:45.607920 1377654 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:28:45.607943 1377654 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1027 23:28:45.630763 1377654 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1027 23:28:45.634239 1377654 api_server.go:141] control plane version: v1.34.1
	I1027 23:28:45.634272 1377654 api_server.go:131] duration metric: took 26.344082ms to wait for apiserver health ...
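
The healthz round-trip above can be reproduced by hand against the same endpoint. By default, unauthenticated access to /healthz is permitted via the system:public-info-viewer role, so a bare curl works (a sketch; -k skips verification of the minikube CA):

    curl -k https://192.168.76.2:8444/healthz    # prints "ok" when the apiserver is healthy
    # or through the kubeconfig written earlier in this run:
    kubectl --kubeconfig /home/jenkins/minikube-integration/21790-1132878/kubeconfig get --raw /healthz
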
	I1027 23:28:45.634281 1377654 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:28:45.638641 1377654 system_pods.go:59] 8 kube-system pods found
	I1027 23:28:45.638684 1377654 system_pods.go:61] "coredns-66bc5c9577-lzssb" [cb585899-022a-4a05-b73d-ab4ef8e7119a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:28:45.638697 1377654 system_pods.go:61] "etcd-default-k8s-diff-port-336451" [d2052799-8302-43e4-b2de-1ae7ecc5d073] Running
	I1027 23:28:45.638707 1377654 system_pods.go:61] "kindnet-ht7mm" [972ca641-7980-4167-9478-45795128282d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1027 23:28:45.638720 1377654 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-336451" [6c97a839-7855-4ce4-a15e-765781f00b89] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:28:45.638727 1377654 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-336451" [45c8bd93-e3d8-416f-9550-55eb28cef602] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:28:45.638740 1377654 system_pods.go:61] "kube-proxy-n4vzn" [883449ce-dcf8-47d7-8f93-9fc7612cf7a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 23:28:45.638749 1377654 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-336451" [fd388522-944b-4447-a8db-8bfa05f722ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:28:45.638760 1377654 system_pods.go:61] "storage-provisioner" [376c0c54-0b9b-47ed-a3c0-d74fcdf0c102] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:28:45.638766 1377654 system_pods.go:74] duration metric: took 4.476674ms to wait for pod list to return data ...
	I1027 23:28:45.638778 1377654 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:28:45.641917 1377654 default_sa.go:45] found service account: "default"
	I1027 23:28:45.641944 1377654 default_sa.go:55] duration metric: took 3.15947ms for default service account to be created ...
	I1027 23:28:45.641953 1377654 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 23:28:45.645819 1377654 system_pods.go:86] 8 kube-system pods found
	I1027 23:28:45.645862 1377654 system_pods.go:89] "coredns-66bc5c9577-lzssb" [cb585899-022a-4a05-b73d-ab4ef8e7119a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 23:28:45.645879 1377654 system_pods.go:89] "etcd-default-k8s-diff-port-336451" [d2052799-8302-43e4-b2de-1ae7ecc5d073] Running
	I1027 23:28:45.645890 1377654 system_pods.go:89] "kindnet-ht7mm" [972ca641-7980-4167-9478-45795128282d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1027 23:28:45.645903 1377654 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-336451" [6c97a839-7855-4ce4-a15e-765781f00b89] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:28:45.645912 1377654 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-336451" [45c8bd93-e3d8-416f-9550-55eb28cef602] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:28:45.645929 1377654 system_pods.go:89] "kube-proxy-n4vzn" [883449ce-dcf8-47d7-8f93-9fc7612cf7a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 23:28:45.645937 1377654 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-336451" [fd388522-944b-4447-a8db-8bfa05f722ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:28:45.645947 1377654 system_pods.go:89] "storage-provisioner" [376c0c54-0b9b-47ed-a3c0-d74fcdf0c102] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 23:28:45.645957 1377654 system_pods.go:126] duration metric: took 3.994657ms to wait for k8s-apps to be running ...
	I1027 23:28:45.645971 1377654 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 23:28:45.646055 1377654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:28:45.676918 1377654 system_svc.go:56] duration metric: took 30.926276ms WaitForService to wait for kubelet
	I1027 23:28:45.676949 1377654 kubeadm.go:587] duration metric: took 9.573062666s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 23:28:45.676987 1377654 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:28:45.681672 1377654 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:28:45.681710 1377654 node_conditions.go:123] node cpu capacity is 2
	I1027 23:28:45.681732 1377654 node_conditions.go:105] duration metric: took 4.733285ms to run NodePressure ...
	I1027 23:28:45.681747 1377654 start.go:242] waiting for startup goroutines ...
	I1027 23:28:45.681759 1377654 start.go:247] waiting for cluster config update ...
	I1027 23:28:45.681770 1377654 start.go:256] writing updated cluster config ...
	I1027 23:28:45.682094 1377654 ssh_runner.go:195] Run: rm -f paused
	I1027 23:28:45.685888 1377654 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:28:45.690787 1377654 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lzssb" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:28:46.440956 1377042 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501792579s
	I1027 23:28:46.442166 1377042 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 23:28:46.442256 1377042 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1027 23:28:46.442529 1377042 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 23:28:46.442629 1377042 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1027 23:28:47.699141 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	W1027 23:28:50.203900 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	I1027 23:28:53.561804 1377042 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 7.118460919s
	I1027 23:28:54.912502 1377042 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.470245048s
	W1027 23:28:52.699038 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	W1027 23:28:55.197551 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	I1027 23:28:56.445283 1377042 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.002426479s
	I1027 23:28:56.468862 1377042 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 23:28:56.493600 1377042 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 23:28:56.511928 1377042 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 23:28:56.512125 1377042 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-852936 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 23:28:56.531902 1377042 kubeadm.go:319] [bootstrap-token] Using token: sau6o0.jgmkfp8yv3ipo3ir
	I1027 23:28:56.535064 1377042 out.go:252]   - Configuring RBAC rules ...
	I1027 23:28:56.535190 1377042 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 23:28:56.542235 1377042 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 23:28:56.552165 1377042 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 23:28:56.557557 1377042 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 23:28:56.562996 1377042 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 23:28:56.570794 1377042 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 23:28:56.857955 1377042 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 23:28:57.402566 1377042 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1027 23:28:57.852594 1377042 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1027 23:28:57.854604 1377042 kubeadm.go:319] 
	I1027 23:28:57.854682 1377042 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1027 23:28:57.854688 1377042 kubeadm.go:319] 
	I1027 23:28:57.854768 1377042 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1027 23:28:57.854773 1377042 kubeadm.go:319] 
	I1027 23:28:57.854799 1377042 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1027 23:28:57.854860 1377042 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 23:28:57.854913 1377042 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 23:28:57.854917 1377042 kubeadm.go:319] 
	I1027 23:28:57.854974 1377042 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1027 23:28:57.854990 1377042 kubeadm.go:319] 
	I1027 23:28:57.855040 1377042 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 23:28:57.855045 1377042 kubeadm.go:319] 
	I1027 23:28:57.855099 1377042 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1027 23:28:57.855176 1377042 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 23:28:57.855247 1377042 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 23:28:57.855252 1377042 kubeadm.go:319] 
	I1027 23:28:57.855339 1377042 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 23:28:57.855419 1377042 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1027 23:28:57.855424 1377042 kubeadm.go:319] 
	I1027 23:28:57.855511 1377042 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token sau6o0.jgmkfp8yv3ipo3ir \
	I1027 23:28:57.855619 1377042 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 \
	I1027 23:28:57.855640 1377042 kubeadm.go:319] 	--control-plane 
	I1027 23:28:57.855644 1377042 kubeadm.go:319] 
	I1027 23:28:57.855732 1377042 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1027 23:28:57.855737 1377042 kubeadm.go:319] 
	I1027 23:28:57.855822 1377042 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token sau6o0.jgmkfp8yv3ipo3ir \
	I1027 23:28:57.855928 1377042 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:13027682bf450cb117a78e82ca472f74d12feb85b84d85419618dfd9b7be1480 
	I1027 23:28:57.859548 1377042 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 23:28:57.859798 1377042 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 23:28:57.859915 1377042 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
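The join commands above carry a --discovery-token-ca-cert-hash. As an aside not part of the captured run, that hash can be recomputed on the control plane with the openssl pipeline from the kubeadm documentation, which should reproduce the sha256 value printed above:

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'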
	I1027 23:28:57.860056 1377042 cni.go:84] Creating CNI manager for ""
	I1027 23:28:57.860085 1377042 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:28:57.866087 1377042 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 23:28:57.869047 1377042 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 23:28:57.873787 1377042 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 23:28:57.873804 1377042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 23:28:57.900716 1377042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 23:28:58.304011 1377042 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 23:28:58.304137 1377042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:28:58.304222 1377042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-852936 minikube.k8s.io/updated_at=2025_10_27T23_28_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f minikube.k8s.io/name=newest-cni-852936 minikube.k8s.io/primary=true
	I1027 23:28:58.583152 1377042 ops.go:34] apiserver oom_adj: -16
	I1027 23:28:58.583260 1377042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:28:59.083773 1377042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:28:59.583352 1377042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:29:00.083507 1377042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:29:00.584121 1377042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1027 23:28:57.198229 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	W1027 23:28:59.200165 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	I1027 23:29:01.083794 1377042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:29:01.583474 1377042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:29:02.084165 1377042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:29:02.584096 1377042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:29:03.084209 1377042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 23:29:03.223283 1377042 kubeadm.go:1114] duration metric: took 4.91918821s to wait for elevateKubeSystemPrivileges
	I1027 23:29:03.223314 1377042 kubeadm.go:403] duration metric: took 27.299673229s to StartCluster
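The repeated "kubectl get sa default" runs above are minikube polling until the default ServiceAccount exists (the elevateKubeSystemPrivileges wait). A roughly equivalent shell loop, with an illustrative 0.5s interval that is an assumption rather than minikube's actual timing:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done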
	I1027 23:29:03.223331 1377042 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:29:03.223404 1377042 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:29:03.224344 1377042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:29:03.224558 1377042 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:29:03.224673 1377042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 23:29:03.224922 1377042 config.go:182] Loaded profile config "newest-cni-852936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:29:03.224929 1377042 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:29:03.225026 1377042 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-852936"
	I1027 23:29:03.225040 1377042 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-852936"
	I1027 23:29:03.225069 1377042 host.go:66] Checking if "newest-cni-852936" exists ...
	I1027 23:29:03.225540 1377042 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:03.225815 1377042 addons.go:69] Setting default-storageclass=true in profile "newest-cni-852936"
	I1027 23:29:03.225830 1377042 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-852936"
	I1027 23:29:03.226108 1377042 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:03.228637 1377042 out.go:179] * Verifying Kubernetes components...
	I1027 23:29:03.235114 1377042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:29:03.261801 1377042 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:29:03.266493 1377042 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:29:03.266520 1377042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:29:03.266582 1377042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:03.266831 1377042 addons.go:238] Setting addon default-storageclass=true in "newest-cni-852936"
	I1027 23:29:03.266867 1377042 host.go:66] Checking if "newest-cni-852936" exists ...
	I1027 23:29:03.267274 1377042 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:03.314603 1377042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34594 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:03.314654 1377042 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:29:03.314668 1377042 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:29:03.314737 1377042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:03.343851 1377042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34594 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:03.624179 1377042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 23:29:03.624311 1377042 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:29:03.646665 1377042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:29:03.683699 1377042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:29:04.241150 1377042 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:29:04.241217 1377042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:29:04.241311 1377042 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
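The sed pipeline above splices a hosts stanza into the CoreDNS Corefile so pods can resolve host.minikube.internal to the gateway 192.168.85.1. The rewritten Corefile can be inspected afterwards; a sketch, assuming admin access to the cluster:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'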
	I1027 23:29:04.425681 1377042 api_server.go:72] duration metric: took 1.201086357s to wait for apiserver process to appear ...
	I1027 23:29:04.425701 1377042 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:29:04.425720 1377042 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 23:29:04.439200 1377042 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 23:29:04.440822 1377042 api_server.go:141] control plane version: v1.34.1
	I1027 23:29:04.440853 1377042 api_server.go:131] duration metric: took 15.144844ms to wait for apiserver health ...
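The healthz wait above can be reproduced by hand; a minimal sketch, where -k skips TLS verification and is acceptable only for this kind of local debugging:

	curl -sk https://192.168.85.2:8443/healthz
	# a healthy apiserver answers with the body: ok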
	I1027 23:29:04.440862 1377042 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:29:04.448423 1377042 system_pods.go:59] 8 kube-system pods found
	I1027 23:29:04.448529 1377042 system_pods.go:61] "coredns-66bc5c9577-jzn5z" [191e4eff-7490-4e8a-9231-7e634396b226] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 23:29:04.448558 1377042 system_pods.go:61] "etcd-newest-cni-852936" [4d42a25f-5e7b-4657-a6f1-d46bc06216dc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:29:04.448578 1377042 system_pods.go:61] "kindnet-q6tfx" [b3f08f81-257b-4bba-9acc-4b3c88d70bb7] Running
	I1027 23:29:04.448611 1377042 system_pods.go:61] "kube-apiserver-newest-cni-852936" [090b241c-c08c-4306-b40c-871e5421048b] Running
	I1027 23:29:04.448637 1377042 system_pods.go:61] "kube-controller-manager-newest-cni-852936" [5016a35c-4906-416f-981d-3d8eafafac9d] Running
	I1027 23:29:04.448660 1377042 system_pods.go:61] "kube-proxy-qcz7m" [8263ca0a-34e2-4388-82ba-1714b8940cba] Running
	I1027 23:29:04.448694 1377042 system_pods.go:61] "kube-scheduler-newest-cni-852936" [4f47dc44-57da-47eb-b115-12f3d5bac007] Running
	I1027 23:29:04.448719 1377042 system_pods.go:61] "storage-provisioner" [ebb4e6b7-17b5-43ab-b54c-34a6b5b2caa2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 23:29:04.448741 1377042 system_pods.go:74] duration metric: took 7.872857ms to wait for pod list to return data ...
	I1027 23:29:04.448778 1377042 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:29:04.452071 1377042 default_sa.go:45] found service account: "default"
	I1027 23:29:04.452095 1377042 default_sa.go:55] duration metric: took 3.294044ms for default service account to be created ...
	I1027 23:29:04.452107 1377042 kubeadm.go:587] duration metric: took 1.227517653s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 23:29:04.452124 1377042 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:29:04.452199 1377042 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 23:29:04.456177 1377042 addons.go:514] duration metric: took 1.231237566s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 23:29:04.456978 1377042 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:29:04.457015 1377042 node_conditions.go:123] node cpu capacity is 2
	I1027 23:29:04.457028 1377042 node_conditions.go:105] duration metric: took 4.899384ms to run NodePressure ...
	I1027 23:29:04.457040 1377042 start.go:242] waiting for startup goroutines ...
	I1027 23:29:04.744776 1377042 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-852936" context rescaled to 1 replicas
	I1027 23:29:04.744816 1377042 start.go:247] waiting for cluster config update ...
	I1027 23:29:04.744830 1377042 start.go:256] writing updated cluster config ...
	I1027 23:29:04.745134 1377042 ssh_runner.go:195] Run: rm -f paused
	I1027 23:29:04.843707 1377042 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:29:04.848726 1377042 out.go:179] * Done! kubectl is now configured to use "newest-cni-852936" cluster and "default" namespace by default
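The closing note flags a minor version skew (kubectl 1.33.2 against cluster 1.34.1), which is within the supported window of one minor version between kubectl and kube-apiserver, so it is informational only. The pair can be checked at any time with:

	kubectl version --output=yaml   # compare clientVersion with serverVersion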
	
	
	==> CRI-O <==
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.452925778Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.45701738Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=bee5acbd-d635-4de5-a3db-0255a2765bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.46654292Z" level=info msg="Running pod sandbox: kube-system/kindnet-q6tfx/POD" id=49de9f71-cccc-475f-b8e5-34c7b13a57ee name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.467662001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.469063448Z" level=info msg="Ran pod sandbox 490abedd9a8fec25e8164db74e1284e26c966e8eff434cbbba4c49f85fb8c1b0 with infra container: kube-system/kube-proxy-qcz7m/POD" id=bee5acbd-d635-4de5-a3db-0255a2765bf0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.472732496Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=49de9f71-cccc-475f-b8e5-34c7b13a57ee name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.474691195Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=54c92129-dd73-4d27-908b-b4d189f1fef9 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.477734454Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=8d89904f-f3df-43a8-9abe-7f6d237770b7 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.487051704Z" level=info msg="Creating container: kube-system/kube-proxy-qcz7m/kube-proxy" id=b23bf940-0e17-43d8-9962-0bcf58e6b575 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.487156411Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.48823911Z" level=info msg="Ran pod sandbox eaa3a7ef77b31f31a4af4490cafadd4096e081ba3fff7d9e8c8da37b256012c5 with infra container: kube-system/kindnet-q6tfx/POD" id=49de9f71-cccc-475f-b8e5-34c7b13a57ee name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.497937699Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=6c4878f0-77e9-4c32-ab3d-26a809dab995 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.499158852Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9181a57b-2703-414b-a2e5-8a8dffbd8bbf name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.493027386Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.500704851Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.537068282Z" level=info msg="Creating container: kube-system/kindnet-q6tfx/kindnet-cni" id=aef2b867-eaf1-41cb-aaa9-1cedc88f07dd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.537202635Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.548540935Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.549059179Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.587872038Z" level=info msg="Created container 833e1434f11f7b2c9a2bacd24a71369131b93f6d545db09e46b459cc7b2c3963: kube-system/kindnet-q6tfx/kindnet-cni" id=aef2b867-eaf1-41cb-aaa9-1cedc88f07dd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.596976166Z" level=info msg="Starting container: 833e1434f11f7b2c9a2bacd24a71369131b93f6d545db09e46b459cc7b2c3963" id=a2f83943-1ec2-47cc-ab6b-2586419dd8b5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.607244093Z" level=info msg="Started container" PID=1457 containerID=833e1434f11f7b2c9a2bacd24a71369131b93f6d545db09e46b459cc7b2c3963 description=kube-system/kindnet-q6tfx/kindnet-cni id=a2f83943-1ec2-47cc-ab6b-2586419dd8b5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=eaa3a7ef77b31f31a4af4490cafadd4096e081ba3fff7d9e8c8da37b256012c5
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.61272229Z" level=info msg="Created container c252a9a2f57881d167ee3023608fcc89933af76f9b82e20ef72a1dfff0d9e370: kube-system/kube-proxy-qcz7m/kube-proxy" id=b23bf940-0e17-43d8-9962-0bcf58e6b575 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.613641885Z" level=info msg="Starting container: c252a9a2f57881d167ee3023608fcc89933af76f9b82e20ef72a1dfff0d9e370" id=83ed26b5-c4df-4d77-a784-792b4da3f95c name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:29:03 newest-cni-852936 crio[837]: time="2025-10-27T23:29:03.617428777Z" level=info msg="Started container" PID=1456 containerID=c252a9a2f57881d167ee3023608fcc89933af76f9b82e20ef72a1dfff0d9e370 description=kube-system/kube-proxy-qcz7m/kube-proxy id=83ed26b5-c4df-4d77-a784-792b4da3f95c name=/runtime.v1.RuntimeService/StartContainer sandboxID=490abedd9a8fec25e8164db74e1284e26c966e8eff434cbbba4c49f85fb8c1b0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	833e1434f11f7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   eaa3a7ef77b31       kindnet-q6tfx                               kube-system
	c252a9a2f5788       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   490abedd9a8fe       kube-proxy-qcz7m                            kube-system
	c693936ea3592       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   19 seconds ago      Running             kube-scheduler            0                   c695bb9836877       kube-scheduler-newest-cni-852936            kube-system
	3d5468a31dd83       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   19 seconds ago      Running             kube-apiserver            0                   51a3db092e48c       kube-apiserver-newest-cni-852936            kube-system
	78ad51c9a063e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   19 seconds ago      Running             kube-controller-manager   0                   33a3e0fe19574       kube-controller-manager-newest-cni-852936   kube-system
	8a16f56e841d9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   19 seconds ago      Running             etcd                      0                   2a68eb8a6a57e       etcd-newest-cni-852936                      kube-system
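The table above is CRI-O's view of the node. As a sketch, the same listing can be pulled directly on the host with crictl:

	sudo crictl ps -a                  # all containers, including exited ones
	sudo crictl ps --name kube-proxy   # filter to a single container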
	
	
	==> describe nodes <==
	Name:               newest-cni-852936
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-852936
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=newest-cni-852936
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T23_28_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 23:28:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-852936
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 23:28:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 23:28:57 +0000   Mon, 27 Oct 2025 23:28:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 23:28:57 +0000   Mon, 27 Oct 2025 23:28:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 23:28:57 +0000   Mon, 27 Oct 2025 23:28:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 27 Oct 2025 23:28:57 +0000   Mon, 27 Oct 2025 23:28:47 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-852936
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                92cc5ecf-38b1-42c9-8ddf-bd258bac7f0d
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-852936                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-q6tfx                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-852936             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-852936    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-qcz7m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-852936             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  20s (x9 over 20s)  kubelet          Node newest-cni-852936 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node newest-cni-852936 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node newest-cni-852936 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-852936 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-852936 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s                 kubelet          Node newest-cni-852936 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-852936 event: Registered Node newest-cni-852936 in Controller
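The Ready=False condition and the node.kubernetes.io/not-ready:NoSchedule taint above are the expected pre-CNI state: kindnet had started only seconds earlier, and the kubelet reports NetworkReady once a CNI config lands in /etc/cni/net.d/, after which the taint is cleared. A quick check, as a sketch:

	kubectl get node newest-cni-852936 -o jsonpath='{.spec.taints[*].key}{"\n"}'
	ls /etc/cni/net.d/   # kindnet writes its config here on the node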
	
	
	==> dmesg <==
	[Oct27 23:04] overlayfs: idmapped layers are currently not supported
	[Oct27 23:06] overlayfs: idmapped layers are currently not supported
	[  +3.129054] overlayfs: idmapped layers are currently not supported
	[Oct27 23:08] overlayfs: idmapped layers are currently not supported
	[Oct27 23:09] overlayfs: idmapped layers are currently not supported
	[  +0.696324] overlayfs: idmapped layers are currently not supported
	[ +42.065460] overlayfs: idmapped layers are currently not supported
	[Oct27 23:10] overlayfs: idmapped layers are currently not supported
	[ +23.722860] overlayfs: idmapped layers are currently not supported
	[Oct27 23:16] overlayfs: idmapped layers are currently not supported
	[Oct27 23:17] overlayfs: idmapped layers are currently not supported
	[Oct27 23:18] overlayfs: idmapped layers are currently not supported
	[Oct27 23:19] overlayfs: idmapped layers are currently not supported
	[Oct27 23:20] overlayfs: idmapped layers are currently not supported
	[Oct27 23:21] overlayfs: idmapped layers are currently not supported
	[Oct27 23:22] overlayfs: idmapped layers are currently not supported
	[ +34.590925] overlayfs: idmapped layers are currently not supported
	[Oct27 23:23] overlayfs: idmapped layers are currently not supported
	[  +6.906011] overlayfs: idmapped layers are currently not supported
	[Oct27 23:25] overlayfs: idmapped layers are currently not supported
	[  +2.284017] overlayfs: idmapped layers are currently not supported
	[Oct27 23:27] overlayfs: idmapped layers are currently not supported
	[  +6.661421] overlayfs: idmapped layers are currently not supported
	[Oct27 23:28] overlayfs: idmapped layers are currently not supported
	[ +11.644898] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8a16f56e841d9ac43fca991279cdd2972cb4937059d46283285e4de74117a01b] <==
	{"level":"warn","ts":"2025-10-27T23:28:50.982341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.029964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.064789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.109381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.200229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.232488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.277422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.353520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.401348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.446491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.469004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.507452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.544605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.592469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.620177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.675563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.748567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.759198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.800803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.842996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.874914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.915273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.955569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:51.989824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:52.211769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44910","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:29:06 up  6:11,  0 user,  load average: 5.70, 4.45, 3.59
	Linux newest-cni-852936 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [833e1434f11f7b2c9a2bacd24a71369131b93f6d545db09e46b459cc7b2c3963] <==
	I1027 23:29:03.719528       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:29:03.719884       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 23:29:03.720014       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:29:03.720026       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:29:03.720040       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:29:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:29:03.917616       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:29:03.917633       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:29:03.917670       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:29:03.919339       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
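The final "nri plugin exited" line only means CRI-O is running without the NRI socket, so kindnet's optional NRI integration backs off; the network-policy controller above keeps running regardless. Confirming the socket's absence matches the message:

	ls -l /var/run/nri/nri.sock   # expect 'No such file or directory' on this node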
	
	
	==> kube-apiserver [3d5468a31dd83c478ac1e377c96322cc8d7468c330388d5712d6f22fe2b1279f] <==
	I1027 23:28:54.366487       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 23:28:54.463443       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1027 23:28:54.464300       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1027 23:28:54.487134       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 23:28:54.580432       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:28:54.580818       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 23:28:54.622033       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 23:28:54.643224       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 23:28:54.729668       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 23:28:54.759297       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 23:28:54.759485       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 23:28:55.873202       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 23:28:55.936600       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 23:28:56.127938       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 23:28:56.144380       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1027 23:28:56.146029       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 23:28:56.152254       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 23:28:57.016374       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 23:28:57.368208       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 23:28:57.400893       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 23:28:57.434581       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 23:29:02.694083       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 23:29:03.000818       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:29:03.007046       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:29:03.101718       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [78ad51c9a063ec99fbf71a2b4186059ca9a048daea93c1f446349bd90358705a] <==
	I1027 23:29:01.961329       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 23:29:01.961380       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 23:29:01.961421       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 23:29:01.987754       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 23:29:01.996295       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 23:29:01.996919       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 23:29:01.974727       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 23:29:01.974750       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 23:29:01.974761       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 23:29:02.000724       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 23:29:01.940349       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 23:29:01.940202       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 23:29:01.998368       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 23:29:02.000358       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 23:29:02.000382       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 23:29:01.940281       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 23:29:02.003378       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:29:01.940136       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 23:29:02.004592       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 23:29:02.004810       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-852936"
	I1027 23:29:02.018122       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 23:29:02.046317       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:29:02.098008       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:29:02.098038       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 23:29:02.098047       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [c252a9a2f57881d167ee3023608fcc89933af76f9b82e20ef72a1dfff0d9e370] <==
	I1027 23:29:03.815936       1 server_linux.go:53] "Using iptables proxy"
	I1027 23:29:03.940061       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 23:29:04.040375       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 23:29:04.040416       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 23:29:04.040483       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 23:29:04.115347       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:29:04.119730       1 server_linux.go:132] "Using iptables Proxier"
	I1027 23:29:04.143883       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 23:29:04.144173       1 server.go:527] "Version info" version="v1.34.1"
	I1027 23:29:04.144188       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:29:04.152413       1 config.go:200] "Starting service config controller"
	I1027 23:29:04.152432       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 23:29:04.152454       1 config.go:106] "Starting endpoint slice config controller"
	I1027 23:29:04.152458       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 23:29:04.152469       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 23:29:04.152474       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 23:29:04.159253       1 config.go:309] "Starting node config controller"
	I1027 23:29:04.159270       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 23:29:04.159277       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 23:29:04.255265       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 23:29:04.255309       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 23:29:04.255365       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
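kube-proxy settled on the iptables backend, and the nodePortAddresses warning is advisory (NodePort connections simply bind on all local IPs). The rules it programs can be inspected on the node, for example:

	sudo iptables -t nat -L KUBE-SERVICES -n | head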
	
	
	==> kube-scheduler [c693936ea3592bf4785d14338cf9c27baee76d0768eaed1cf4886261a608a3e0] <==
	I1027 23:28:54.895333       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 23:28:54.895504       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:28:54.895641       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:28:54.895935       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1027 23:28:54.906489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1027 23:28:54.923050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 23:28:54.923340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 23:28:54.923510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 23:28:54.923539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 23:28:54.923670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 23:28:54.923720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 23:28:54.923759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 23:28:54.923793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 23:28:54.923821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 23:28:54.923853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 23:28:54.923886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 23:28:54.923915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 23:28:54.926206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 23:28:54.926257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 23:28:54.926293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 23:28:54.926505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 23:28:54.926633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 23:28:54.926704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 23:28:55.786857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1027 23:28:58.096070       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 23:28:57 newest-cni-852936 kubelet[1300]: I1027 23:28:57.996047    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/447484e7bda7562280ea985d53c7525b-usr-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-852936\" (UID: \"447484e7bda7562280ea985d53c7525b\") " pod="kube-system/kube-controller-manager-newest-cni-852936"
	Oct 27 23:28:57 newest-cni-852936 kubelet[1300]: I1027 23:28:57.996090    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/447484e7bda7562280ea985d53c7525b-usr-local-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-852936\" (UID: \"447484e7bda7562280ea985d53c7525b\") " pod="kube-system/kube-controller-manager-newest-cni-852936"
	Oct 27 23:28:58 newest-cni-852936 kubelet[1300]: I1027 23:28:58.432095    1300 apiserver.go:52] "Watching apiserver"
	Oct 27 23:28:58 newest-cni-852936 kubelet[1300]: I1027 23:28:58.488920    1300 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 27 23:28:58 newest-cni-852936 kubelet[1300]: I1027 23:28:58.721310    1300 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-852936"
	Oct 27 23:28:58 newest-cni-852936 kubelet[1300]: E1027 23:28:58.736360    1300 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-852936\" already exists" pod="kube-system/etcd-newest-cni-852936"
	Oct 27 23:28:58 newest-cni-852936 kubelet[1300]: I1027 23:28:58.852574    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-852936" podStartSLOduration=1.8525554610000001 podStartE2EDuration="1.852555461s" podCreationTimestamp="2025-10-27 23:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:28:58.812769582 +0000 UTC m=+1.522228659" watchObservedRunningTime="2025-10-27 23:28:58.852555461 +0000 UTC m=+1.562014530"
	Oct 27 23:28:58 newest-cni-852936 kubelet[1300]: I1027 23:28:58.853246    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-852936" podStartSLOduration=1.853234552 podStartE2EDuration="1.853234552s" podCreationTimestamp="2025-10-27 23:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:28:58.853010868 +0000 UTC m=+1.562470028" watchObservedRunningTime="2025-10-27 23:28:58.853234552 +0000 UTC m=+1.562693629"
	Oct 27 23:28:58 newest-cni-852936 kubelet[1300]: I1027 23:28:58.927980    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-852936" podStartSLOduration=1.927949814 podStartE2EDuration="1.927949814s" podCreationTimestamp="2025-10-27 23:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:28:58.895887958 +0000 UTC m=+1.605347035" watchObservedRunningTime="2025-10-27 23:28:58.927949814 +0000 UTC m=+1.637408891"
	Oct 27 23:28:58 newest-cni-852936 kubelet[1300]: I1027 23:28:58.951205    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-852936" podStartSLOduration=1.951188228 podStartE2EDuration="1.951188228s" podCreationTimestamp="2025-10-27 23:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:28:58.929237529 +0000 UTC m=+1.638696614" watchObservedRunningTime="2025-10-27 23:28:58.951188228 +0000 UTC m=+1.660647305"
	Oct 27 23:29:02 newest-cni-852936 kubelet[1300]: I1027 23:29:02.015339    1300 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 27 23:29:02 newest-cni-852936 kubelet[1300]: I1027 23:29:02.016113    1300 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 27 23:29:03 newest-cni-852936 kubelet[1300]: I1027 23:29:03.242404    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b3f08f81-257b-4bba-9acc-4b3c88d70bb7-cni-cfg\") pod \"kindnet-q6tfx\" (UID: \"b3f08f81-257b-4bba-9acc-4b3c88d70bb7\") " pod="kube-system/kindnet-q6tfx"
	Oct 27 23:29:03 newest-cni-852936 kubelet[1300]: I1027 23:29:03.242698    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8263ca0a-34e2-4388-82ba-1714b8940cba-kube-proxy\") pod \"kube-proxy-qcz7m\" (UID: \"8263ca0a-34e2-4388-82ba-1714b8940cba\") " pod="kube-system/kube-proxy-qcz7m"
	Oct 27 23:29:03 newest-cni-852936 kubelet[1300]: I1027 23:29:03.242730    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8263ca0a-34e2-4388-82ba-1714b8940cba-xtables-lock\") pod \"kube-proxy-qcz7m\" (UID: \"8263ca0a-34e2-4388-82ba-1714b8940cba\") " pod="kube-system/kube-proxy-qcz7m"
	Oct 27 23:29:03 newest-cni-852936 kubelet[1300]: I1027 23:29:03.242749    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv74l\" (UniqueName: \"kubernetes.io/projected/8263ca0a-34e2-4388-82ba-1714b8940cba-kube-api-access-lv74l\") pod \"kube-proxy-qcz7m\" (UID: \"8263ca0a-34e2-4388-82ba-1714b8940cba\") " pod="kube-system/kube-proxy-qcz7m"
	Oct 27 23:29:03 newest-cni-852936 kubelet[1300]: I1027 23:29:03.242770    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3f08f81-257b-4bba-9acc-4b3c88d70bb7-xtables-lock\") pod \"kindnet-q6tfx\" (UID: \"b3f08f81-257b-4bba-9acc-4b3c88d70bb7\") " pod="kube-system/kindnet-q6tfx"
	Oct 27 23:29:03 newest-cni-852936 kubelet[1300]: I1027 23:29:03.242798    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt94w\" (UniqueName: \"kubernetes.io/projected/b3f08f81-257b-4bba-9acc-4b3c88d70bb7-kube-api-access-lt94w\") pod \"kindnet-q6tfx\" (UID: \"b3f08f81-257b-4bba-9acc-4b3c88d70bb7\") " pod="kube-system/kindnet-q6tfx"
	Oct 27 23:29:03 newest-cni-852936 kubelet[1300]: I1027 23:29:03.242817    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3f08f81-257b-4bba-9acc-4b3c88d70bb7-lib-modules\") pod \"kindnet-q6tfx\" (UID: \"b3f08f81-257b-4bba-9acc-4b3c88d70bb7\") " pod="kube-system/kindnet-q6tfx"
	Oct 27 23:29:03 newest-cni-852936 kubelet[1300]: I1027 23:29:03.242835    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8263ca0a-34e2-4388-82ba-1714b8940cba-lib-modules\") pod \"kube-proxy-qcz7m\" (UID: \"8263ca0a-34e2-4388-82ba-1714b8940cba\") " pod="kube-system/kube-proxy-qcz7m"
	Oct 27 23:29:03 newest-cni-852936 kubelet[1300]: I1027 23:29:03.400983    1300 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 27 23:29:03 newest-cni-852936 kubelet[1300]: W1027 23:29:03.466259    1300 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb/crio-490abedd9a8fec25e8164db74e1284e26c966e8eff434cbbba4c49f85fb8c1b0 WatchSource:0}: Error finding container 490abedd9a8fec25e8164db74e1284e26c966e8eff434cbbba4c49f85fb8c1b0: Status 404 returned error can't find the container with id 490abedd9a8fec25e8164db74e1284e26c966e8eff434cbbba4c49f85fb8c1b0
	Oct 27 23:29:03 newest-cni-852936 kubelet[1300]: W1027 23:29:03.482818    1300 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb/crio-eaa3a7ef77b31f31a4af4490cafadd4096e081ba3fff7d9e8c8da37b256012c5 WatchSource:0}: Error finding container eaa3a7ef77b31f31a4af4490cafadd4096e081ba3fff7d9e8c8da37b256012c5: Status 404 returned error can't find the container with id eaa3a7ef77b31f31a4af4490cafadd4096e081ba3fff7d9e8c8da37b256012c5
	Oct 27 23:29:03 newest-cni-852936 kubelet[1300]: I1027 23:29:03.761407    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-q6tfx" podStartSLOduration=0.761390475 podStartE2EDuration="761.390475ms" podCreationTimestamp="2025-10-27 23:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:29:03.760905955 +0000 UTC m=+6.470365032" watchObservedRunningTime="2025-10-27 23:29:03.761390475 +0000 UTC m=+6.470849552"
	Oct 27 23:29:04 newest-cni-852936 kubelet[1300]: I1027 23:29:04.064758    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qcz7m" podStartSLOduration=1.064719518 podStartE2EDuration="1.064719518s" podCreationTimestamp="2025-10-27 23:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:29:03.794080164 +0000 UTC m=+6.503539249" watchObservedRunningTime="2025-10-27 23:29:04.064719518 +0000 UTC m=+6.774178587"
	

                                                
                                                
-- /stdout --
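A note on the kube-scheduler block earlier in this log: the burst of "Failed to watch ... is forbidden" errors is the familiar shape of a control plane that has just restarted, where the scheduler's informers begin listing before the apiserver is serving its RBAC bindings; the errors stop as soon as "Caches are synced" is logged at 23:28:58. To separate that startup race from a real permissions regression, one can ask the apiserver directly whether system:kube-scheduler holds the verb in question. A minimal client-go sketch, assuming only a reachable kubeconfig at the default path (this helper is not part of the test suite):

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the conventional ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Mirror one of the forbidden watches from the log:
	// list resourceslices.resource.k8s.io as system:kube-scheduler.
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User: "system:kube-scheduler",
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Group:    "resource.k8s.io",
				Resource: "resourceslices",
			},
		},
	}
	res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
}

If allowed comes back true, the forbidden errors were purely an ordering effect; if false, the ClusterRoleBinding for system:kube-scheduler is genuinely missing.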
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-852936 -n newest-cni-852936
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-852936 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-jzn5z storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-852936 describe pod coredns-66bc5c9577-jzn5z storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-852936 describe pod coredns-66bc5c9577-jzn5z storage-provisioner: exit status 1 (94.601236ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-jzn5z" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-852936 describe pod coredns-66bc5c9577-jzn5z storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.44s)
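The post-mortem above locates leftover pods with a field selector (status.phase!=Running). The same query can be issued programmatically; a minimal client-go sketch under the same assumption of a kubeconfig at the default path (not part of the test suite):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Same filter the helper passes to kubectl: every pod not in phase Running.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

Here the two hits, coredns-66bc5c9577-jzn5z and storage-provisioner, had already been deleted by the time describe ran, hence the NotFound errors in the stderr above.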

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (5.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-852936 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-852936 --alsologtostderr -v=1: exit status 80 (1.655983775s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-852936 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 23:29:25.283902 1384246 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:29:25.284458 1384246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:29:25.284782 1384246 out.go:374] Setting ErrFile to fd 2...
	I1027 23:29:25.284808 1384246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:29:25.285737 1384246 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:29:25.286142 1384246 out.go:368] Setting JSON to false
	I1027 23:29:25.286193 1384246 mustload.go:66] Loading cluster: newest-cni-852936
	I1027 23:29:25.286642 1384246 config.go:182] Loaded profile config "newest-cni-852936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:29:25.287171 1384246 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:25.304900 1384246 host.go:66] Checking if "newest-cni-852936" exists ...
	I1027 23:29:25.305208 1384246 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:29:25.370247 1384246 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-27 23:29:25.360774201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:29:25.371057 1384246 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761414747-21797/minikube-v1.37.0-1761414747-21797-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761414747-21797-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-852936 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 23:29:25.374596 1384246 out.go:179] * Pausing node newest-cni-852936 ... 
	I1027 23:29:25.378238 1384246 host.go:66] Checking if "newest-cni-852936" exists ...
	I1027 23:29:25.378752 1384246 ssh_runner.go:195] Run: systemctl --version
	I1027 23:29:25.378805 1384246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:25.396295 1384246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:25.501446 1384246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:29:25.518225 1384246 pause.go:52] kubelet running: true
	I1027 23:29:25.518309 1384246 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 23:29:25.773701 1384246 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 23:29:25.773850 1384246 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 23:29:25.857263 1384246 cri.go:89] found id: "e2a7c7914491369242dd969c692d5341b722cd50dd34558d711d71dbe029a0ae"
	I1027 23:29:25.857288 1384246 cri.go:89] found id: "d84aeb60c3d677348b168a700554376d45dc7c3accb07b90ed78a7aeb9c54b4d"
	I1027 23:29:25.857294 1384246 cri.go:89] found id: "330dc9b597bf25efc2a585d5e204a8122f12b9d06572abb8eca0714117e09773"
	I1027 23:29:25.857298 1384246 cri.go:89] found id: "7ba655a45a78e5e901dbbfebe2a50cccb83aae7f62e5ff23596fb3ec81ccb126"
	I1027 23:29:25.857301 1384246 cri.go:89] found id: "c24fe513253c6dc838d98980bfab0d60b8ee3c4899660c10d658adf5d75315be"
	I1027 23:29:25.857305 1384246 cri.go:89] found id: "88f79d403d4f728d053f809d89ffcfddf313be934b17854c1851af271cdcc8f3"
	I1027 23:29:25.857308 1384246 cri.go:89] found id: ""
	I1027 23:29:25.857364 1384246 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 23:29:25.884159 1384246 retry.go:31] will retry after 244.143013ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:29:25Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:29:26.128518 1384246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:29:26.141926 1384246 pause.go:52] kubelet running: false
	I1027 23:29:26.141990 1384246 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 23:29:26.311050 1384246 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 23:29:26.311129 1384246 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 23:29:26.389288 1384246 cri.go:89] found id: "e2a7c7914491369242dd969c692d5341b722cd50dd34558d711d71dbe029a0ae"
	I1027 23:29:26.389308 1384246 cri.go:89] found id: "d84aeb60c3d677348b168a700554376d45dc7c3accb07b90ed78a7aeb9c54b4d"
	I1027 23:29:26.389313 1384246 cri.go:89] found id: "330dc9b597bf25efc2a585d5e204a8122f12b9d06572abb8eca0714117e09773"
	I1027 23:29:26.389316 1384246 cri.go:89] found id: "7ba655a45a78e5e901dbbfebe2a50cccb83aae7f62e5ff23596fb3ec81ccb126"
	I1027 23:29:26.389320 1384246 cri.go:89] found id: "c24fe513253c6dc838d98980bfab0d60b8ee3c4899660c10d658adf5d75315be"
	I1027 23:29:26.389323 1384246 cri.go:89] found id: "88f79d403d4f728d053f809d89ffcfddf313be934b17854c1851af271cdcc8f3"
	I1027 23:29:26.389326 1384246 cri.go:89] found id: ""
	I1027 23:29:26.389374 1384246 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 23:29:26.400857 1384246 retry.go:31] will retry after 196.70572ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:29:26Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:29:26.598333 1384246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:29:26.615568 1384246 pause.go:52] kubelet running: false
	I1027 23:29:26.615655 1384246 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 23:29:26.759997 1384246 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 23:29:26.760108 1384246 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 23:29:26.826151 1384246 cri.go:89] found id: "e2a7c7914491369242dd969c692d5341b722cd50dd34558d711d71dbe029a0ae"
	I1027 23:29:26.826174 1384246 cri.go:89] found id: "d84aeb60c3d677348b168a700554376d45dc7c3accb07b90ed78a7aeb9c54b4d"
	I1027 23:29:26.826179 1384246 cri.go:89] found id: "330dc9b597bf25efc2a585d5e204a8122f12b9d06572abb8eca0714117e09773"
	I1027 23:29:26.826182 1384246 cri.go:89] found id: "7ba655a45a78e5e901dbbfebe2a50cccb83aae7f62e5ff23596fb3ec81ccb126"
	I1027 23:29:26.826185 1384246 cri.go:89] found id: "c24fe513253c6dc838d98980bfab0d60b8ee3c4899660c10d658adf5d75315be"
	I1027 23:29:26.826189 1384246 cri.go:89] found id: "88f79d403d4f728d053f809d89ffcfddf313be934b17854c1851af271cdcc8f3"
	I1027 23:29:26.826192 1384246 cri.go:89] found id: ""
	I1027 23:29:26.826242 1384246 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 23:29:26.840814 1384246 out.go:203] 
	W1027 23:29:26.843683 1384246 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:29:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:29:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 23:29:26.843752 1384246 out.go:285] * 
	* 
	W1027 23:29:26.853075 1384246 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 23:29:26.855891 1384246 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-852936 --alsologtostderr -v=1 failed: exit status 80
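The trace shows why the pause fails: `sudo runc list -f json` exits 1 with `open /run/runc: no such file or directory`, so minikube's bounded retry (each "will retry after ..." line) sees the same error every time and finally surfaces GUEST_PAUSE. The retry shape visible in the log is roughly the following; an illustrative sketch only, not minikube's actual pkg/util/retry code:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryList mimics the shape of the trace: run `sudo runc list -f json`
// up to attempts times, sleeping a randomized sub-second delay between
// tries, and give up with the last error once the budget is spent.
func retryList(attempts int) ([]byte, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			return out, nil
		}
		lastErr = fmt.Errorf("list running: %w: %s", err, out)
		delay := time.Duration(150+rand.Intn(150)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, lastErr)
		time.Sleep(delay)
	}
	return nil, lastErr
}

func main() {
	if _, err := retryList(3); err != nil {
		fmt.Println("giving up:", err) // corresponds to the GUEST_PAUSE exit above
	}
}

The missing /run/runc directory suggests the containers on this CRI-O node live under a different runtime root, so no amount of backoff can succeed; the retry only delays the same failure.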
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-852936
helpers_test.go:243: (dbg) docker inspect newest-cni-852936:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb",
	        "Created": "2025-10-27T23:28:26.049254307Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1382512,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T23:29:10.063857156Z",
	            "FinishedAt": "2025-10-27T23:29:09.245239915Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb/hostname",
	        "HostsPath": "/var/lib/docker/containers/65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb/hosts",
	        "LogPath": "/var/lib/docker/containers/65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb/65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb-json.log",
	        "Name": "/newest-cni-852936",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-852936:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-852936",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb",
	                "LowerDir": "/var/lib/docker/overlay2/683ddf4845681cbcd053af9f794e7938bfc1ce46288f9101f6ced4d05d48a278-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/683ddf4845681cbcd053af9f794e7938bfc1ce46288f9101f6ced4d05d48a278/merged",
	                "UpperDir": "/var/lib/docker/overlay2/683ddf4845681cbcd053af9f794e7938bfc1ce46288f9101f6ced4d05d48a278/diff",
	                "WorkDir": "/var/lib/docker/overlay2/683ddf4845681cbcd053af9f794e7938bfc1ce46288f9101f6ced4d05d48a278/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-852936",
	                "Source": "/var/lib/docker/volumes/newest-cni-852936/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-852936",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-852936",
	                "name.minikube.sigs.k8s.io": "newest-cni-852936",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e6d916a599100e7d9736b51565186c500ba78176398322ddf895a1204ac23c25",
	            "SandboxKey": "/var/run/docker/netns/e6d916a59910",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34604"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34605"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34608"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34606"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34607"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-852936": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:2e:4a:30:b7:e0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1cc9b34231316ca6e2b3bcce7977749e2a63825d24e6f604ea63947f22c91175",
	                    "EndpointID": "19241d30e867fba1f7bc7078f90f06bf0cca7083b14d39ca75aec3e358f22f1c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-852936",
	                        "65a8d98d29dc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
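The pause helper reads the published SSH port straight out of this inspect output with a Go template ({{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, visible in the stderr above). The same lookup done by decoding the JSON, as a hedged sketch (container name taken from this report; error handling abbreviated):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Just enough of `docker inspect` output to reach NetworkSettings.Ports.
type containerInfo struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "newest-cni-852936").Output()
	if err != nil {
		panic(err)
	}
	var containers []containerInfo // docker inspect always returns an array
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	if len(containers) == 0 {
		panic("container not found")
	}
	bindings := containers[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		panic("no published 22/tcp binding")
	}
	fmt.Printf("ssh endpoint: %s:%s\n", bindings[0].HostIP, bindings[0].HostPort) // 127.0.0.1:34604 in this run
}

Decoding the JSON avoids the quoting pitfalls of the nested template and fails loudly when the port map is empty.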
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-852936 -n newest-cni-852936
E1027 23:29:27.167781 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/bridge-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-852936 -n newest-cni-852936: exit status 2 (374.788668ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-852936 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-852936 logs -n 25: (1.085207933s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ no-preload-947754 image list --format=json                                                                                                                                                                                                    │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ pause   │ -p no-preload-947754 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	│ delete  │ -p no-preload-947754                                                                                                                                                                                                                          │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ delete  │ -p no-preload-947754                                                                                                                                                                                                                          │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ delete  │ -p disable-driver-mounts-247293                                                                                                                                                                                                               │ disable-driver-mounts-247293 │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ start   │ -p default-k8s-diff-port-336451 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:28 UTC │
	│ addons  │ enable metrics-server -p embed-certs-790322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	│ stop    │ -p embed-certs-790322 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-790322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ start   │ -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:27 UTC │
	│ image   │ embed-certs-790322 image list --format=json                                                                                                                                                                                                   │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ pause   │ -p embed-certs-790322 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-336451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-336451 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ delete  │ -p embed-certs-790322                                                                                                                                                                                                                         │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ delete  │ -p embed-certs-790322                                                                                                                                                                                                                         │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ start   │ -p newest-cni-852936 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:29 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-336451 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ start   │ -p default-k8s-diff-port-336451 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:29 UTC │
	│ addons  │ enable metrics-server -p newest-cni-852936 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │                     │
	│ stop    │ -p newest-cni-852936 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ addons  │ enable dashboard -p newest-cni-852936 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ start   │ -p newest-cni-852936 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ image   │ newest-cni-852936 image list --format=json                                                                                                                                                                                                    │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ pause   │ -p newest-cni-852936 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 23:29:09
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 23:29:09.766117 1382384 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:29:09.766264 1382384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:29:09.766276 1382384 out.go:374] Setting ErrFile to fd 2...
	I1027 23:29:09.766281 1382384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:29:09.766839 1382384 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:29:09.767786 1382384 out.go:368] Setting JSON to false
	I1027 23:29:09.769056 1382384 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22299,"bootTime":1761585451,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 23:29:09.769139 1382384 start.go:143] virtualization:  
	I1027 23:29:09.772858 1382384 out.go:179] * [newest-cni-852936] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 23:29:09.776686 1382384 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:29:09.776813 1382384 notify.go:221] Checking for updates...
	I1027 23:29:09.782290 1382384 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:29:09.785212 1382384 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:29:09.788210 1382384 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 23:29:09.791116 1382384 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 23:29:09.793964 1382384 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 23:29:09.797372 1382384 config.go:182] Loaded profile config "newest-cni-852936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:29:09.797914 1382384 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:29:09.833947 1382384 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 23:29:09.834073 1382384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:29:09.893931 1382384 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 23:29:09.878864517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:29:09.894062 1382384 docker.go:318] overlay module found
	I1027 23:29:09.897336 1382384 out.go:179] * Using the docker driver based on existing profile
	I1027 23:29:09.900341 1382384 start.go:307] selected driver: docker
	I1027 23:29:09.900381 1382384 start.go:928] validating driver "docker" against &{Name:newest-cni-852936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:29:09.900493 1382384 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:29:09.901343 1382384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:29:09.956323 1382384 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 23:29:09.947321156 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:29:09.956662 1382384 start_flags.go:1010] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 23:29:09.956696 1382384 cni.go:84] Creating CNI manager for ""
	I1027 23:29:09.956755 1382384 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
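
The kindnet recommendation above is a driver/runtime decision: the docker driver paired with a non-Docker runtime such as CRI-O ships no pod network of its own, so minikube selects a CNI for it. A much-simplified Go sketch of that decision table (hypothetical helper, not minikube's actual cni.go logic):

	package main

	import "fmt"

	// recommendCNI is a hypothetical reduction of the decision logged
	// above: docker driver + crio runtime => kindnet.
	func recommendCNI(driver, runtime string) string {
		if driver == "docker" && runtime == "crio" {
			return "kindnet"
		}
		return "" // fall back to the runtime's default networking
	}

	func main() {
		fmt.Println(recommendCNI("docker", "crio")) // prints: kindnet
	}
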
	I1027 23:29:09.956801 1382384 start.go:351] cluster config:
	{Name:newest-cni-852936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:29:09.959906 1382384 out.go:179] * Starting "newest-cni-852936" primary control-plane node in "newest-cni-852936" cluster
	I1027 23:29:09.962722 1382384 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 23:29:09.965885 1382384 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 23:29:09.968839 1382384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:29:09.968947 1382384 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 23:29:09.968971 1382384 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 23:29:09.968983 1382384 cache.go:59] Caching tarball of preloaded images
	I1027 23:29:09.969090 1382384 preload.go:233] Found /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 23:29:09.969100 1382384 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
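
The preload lines above amount to an existence check: the tarball is already under the cache directory, so the download is skipped. A minimal Go sketch of that check, assuming a hypothetical path and helper name (the real download step is elided):

	package main

	import (
		"fmt"
		"os"
	)

	// preloadCached reports whether the preload tarball already exists,
	// mirroring the "Found ... in cache, skipping download" branch.
	func preloadCached(tarball string) (bool, error) {
		_, err := os.Stat(tarball)
		if err == nil {
			return true, nil
		}
		if os.IsNotExist(err) {
			return false, nil // caller would download here
		}
		return false, err
	}

	func main() {
		fmt.Println(preloadCached("/tmp/preloaded-images.tar.lz4"))
	}
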
	I1027 23:29:09.969208 1382384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/config.json ...
	I1027 23:29:09.999805 1382384 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 23:29:09.999844 1382384 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 23:29:09.999859 1382384 cache.go:233] Successfully downloaded all kic artifacts
	I1027 23:29:09.999881 1382384 start.go:360] acquireMachinesLock for newest-cni-852936: {Name:mk3f294285068916d485e6bfcdad9561ce18d17d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:29:09.999976 1382384 start.go:364] duration metric: took 68.694µs to acquireMachinesLock for "newest-cni-852936"
	I1027 23:29:10.000031 1382384 start.go:96] Skipping create...Using existing machine configuration
	I1027 23:29:10.000085 1382384 fix.go:55] fixHost starting: 
	I1027 23:29:10.000495 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:10.025325 1382384 fix.go:113] recreateIfNeeded on newest-cni-852936: state=Stopped err=<nil>
	W1027 23:29:10.025370 1382384 fix.go:139] unexpected machine state, will restart: <nil>
	W1027 23:29:08.700022 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	W1027 23:29:11.196407 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	I1027 23:29:10.028623 1382384 out.go:252] * Restarting existing docker container for "newest-cni-852936" ...
	I1027 23:29:10.028792 1382384 cli_runner.go:164] Run: docker start newest-cni-852936
	I1027 23:29:10.308194 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:10.330658 1382384 kic.go:430] container "newest-cni-852936" state is running.
	I1027 23:29:10.331059 1382384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852936
	I1027 23:29:10.353242 1382384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/config.json ...
	I1027 23:29:10.353470 1382384 machine.go:94] provisionDockerMachine start ...
	I1027 23:29:10.353542 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:10.372326 1382384 main.go:143] libmachine: Using SSH client type: native
	I1027 23:29:10.372679 1382384 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34604 <nil> <nil>}
	I1027 23:29:10.372697 1382384 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:29:10.373227 1382384 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54926->127.0.0.1:34604: read: connection reset by peer
	I1027 23:29:13.522271 1382384 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-852936
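
The failed dial at 23:29:10 ("read: connection reset by peer") followed by a clean hostname result at 23:29:13 is the usual race after docker start: the forwarded port exists before sshd is ready, so the client retries until the handshake succeeds. A minimal Go sketch of such a readiness poll; the address and timeouts are illustrative, not minikube's values:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForTCP polls addr until a connection succeeds or the deadline
	// passes; a dial reset by the peer simply triggers another attempt.
	func waitForTCP(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not reachable within %s", addr, timeout)
	}

	func main() {
		fmt.Println(waitForTCP("127.0.0.1:34604", 30*time.Second))
	}
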
	
	I1027 23:29:13.522368 1382384 ubuntu.go:182] provisioning hostname "newest-cni-852936"
	I1027 23:29:13.522473 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:13.543423 1382384 main.go:143] libmachine: Using SSH client type: native
	I1027 23:29:13.543747 1382384 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34604 <nil> <nil>}
	I1027 23:29:13.543767 1382384 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-852936 && echo "newest-cni-852936" | sudo tee /etc/hostname
	I1027 23:29:13.705024 1382384 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-852936
	
	I1027 23:29:13.705100 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:13.724776 1382384 main.go:143] libmachine: Using SSH client type: native
	I1027 23:29:13.725087 1382384 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34604 <nil> <nil>}
	I1027 23:29:13.725105 1382384 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-852936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-852936/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-852936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:29:13.874768 1382384 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 23:29:13.874793 1382384 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:29:13.874815 1382384 ubuntu.go:190] setting up certificates
	I1027 23:29:13.874826 1382384 provision.go:84] configureAuth start
	I1027 23:29:13.874883 1382384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852936
	I1027 23:29:13.897512 1382384 provision.go:143] copyHostCerts
	I1027 23:29:13.897574 1382384 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:29:13.897589 1382384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:29:13.897665 1382384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:29:13.897760 1382384 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:29:13.897765 1382384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:29:13.897791 1382384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:29:13.897849 1382384 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:29:13.897854 1382384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:29:13.897875 1382384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:29:13.897919 1382384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.newest-cni-852936 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-852936]
	I1027 23:29:14.197889 1382384 provision.go:177] copyRemoteCerts
	I1027 23:29:14.198003 1382384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:29:14.198069 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.216790 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:14.322005 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:29:14.339619 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 23:29:14.357698 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 23:29:14.374994 1382384 provision.go:87] duration metric: took 500.144707ms to configureAuth
	I1027 23:29:14.375019 1382384 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:29:14.375217 1382384 config.go:182] Loaded profile config "newest-cni-852936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:29:14.375326 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.392639 1382384 main.go:143] libmachine: Using SSH client type: native
	I1027 23:29:14.392951 1382384 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34604 <nil> <nil>}
	I1027 23:29:14.392965 1382384 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:29:14.687600 1382384 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:29:14.687621 1382384 machine.go:97] duration metric: took 4.334134462s to provisionDockerMachine
	I1027 23:29:14.687665 1382384 start.go:293] postStartSetup for "newest-cni-852936" (driver="docker")
	I1027 23:29:14.687685 1382384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:29:14.687758 1382384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:29:14.687803 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.707820 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:14.810235 1382384 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:29:14.813577 1382384 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:29:14.813651 1382384 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:29:14.813665 1382384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:29:14.813736 1382384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:29:14.813819 1382384 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:29:14.813926 1382384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:29:14.821590 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:29:14.839199 1382384 start.go:296] duration metric: took 151.517291ms for postStartSetup
	I1027 23:29:14.839285 1382384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:29:14.839332 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.857380 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:14.963797 1382384 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:29:14.968647 1382384 fix.go:57] duration metric: took 4.968601832s for fixHost
	I1027 23:29:14.968672 1382384 start.go:83] releasing machines lock for "newest-cni-852936", held for 4.96867508s
	I1027 23:29:14.968743 1382384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852936
	I1027 23:29:14.985572 1382384 ssh_runner.go:195] Run: cat /version.json
	I1027 23:29:14.985633 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.985873 1382384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:29:14.985939 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:15.005851 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:15.021224 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:15.134518 1382384 ssh_runner.go:195] Run: systemctl --version
	I1027 23:29:15.236918 1382384 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:29:15.280309 1382384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:29:15.285018 1382384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:29:15.285087 1382384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:29:15.293768 1382384 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 23:29:15.293791 1382384 start.go:496] detecting cgroup driver to use...
	I1027 23:29:15.293821 1382384 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 23:29:15.293867 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:29:15.309499 1382384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:29:15.323058 1382384 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:29:15.323175 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:29:15.339572 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:29:15.354227 1382384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:29:15.468373 1382384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:29:15.591069 1382384 docker.go:234] disabling docker service ...
	I1027 23:29:15.591189 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:29:15.606878 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:29:15.620798 1382384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:29:15.748929 1382384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:29:15.872886 1382384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:29:15.890660 1382384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:29:15.906654 1382384 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:29:15.906761 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.916506 1382384 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:29:15.916600 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.926592 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.936286 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.945124 1382384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:29:15.953537 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.962746 1382384 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.971004 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.979956 1382384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:29:15.987602 1382384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:29:16.001973 1382384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:29:16.135477 1382384 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 23:29:16.286541 1382384 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:29:16.286667 1382384 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:29:16.291239 1382384 start.go:564] Will wait 60s for crictl version
	I1027 23:29:16.291360 1382384 ssh_runner.go:195] Run: which crictl
	I1027 23:29:16.294882 1382384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:29:16.321680 1382384 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
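
"Will wait 60s for socket path" is a stat poll: the runtime counts as up once /var/run/crio/crio.sock exists, and only then is crictl asked for a version. A Go sketch of the same poll, assuming the stat check is all that is needed (it does not speak the CRI API):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the timeout elapses,
	// mirroring the "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(250 * time.Millisecond)
		}
		return fmt.Errorf("socket %s did not appear within %s", path, timeout)
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	}
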
	I1027 23:29:16.321849 1382384 ssh_runner.go:195] Run: crio --version
	I1027 23:29:16.360828 1382384 ssh_runner.go:195] Run: crio --version
	I1027 23:29:16.393456 1382384 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 23:29:16.396391 1382384 cli_runner.go:164] Run: docker network inspect newest-cni-852936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:29:16.413033 1382384 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 23:29:16.416904 1382384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:29:16.429883 1382384 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1027 23:29:13.697418 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	W1027 23:29:16.200317 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	I1027 23:29:16.432630 1382384 kubeadm.go:884] updating cluster {Name:newest-cni-852936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:29:16.432775 1382384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:29:16.432862 1382384 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:29:16.470089 1382384 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:29:16.470114 1382384 crio.go:433] Images already preloaded, skipping extraction
	I1027 23:29:16.470176 1382384 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:29:16.502365 1382384 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:29:16.502412 1382384 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:29:16.502461 1382384 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 23:29:16.502589 1382384 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-852936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 23:29:16.502687 1382384 ssh_runner.go:195] Run: crio config
	I1027 23:29:16.576598 1382384 cni.go:84] Creating CNI manager for ""
	I1027 23:29:16.576620 1382384 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:29:16.576659 1382384 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1027 23:29:16.576689 1382384 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-852936 NodeName:newest-cni-852936 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:29:16.576834 1382384 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-852936"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
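
minikube renders the manifest above from its kubeadm options with a Go template, then copies the result to /var/tmp/minikube/kubeadm.yaml.new (the 2212-byte scp below). A much-reduced sketch of that render step, with a hypothetical two-field struct standing in for the full options:

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// opts is a hypothetical, much-reduced stand-in for minikube's kubeadm
	// options; only the two fields referenced by the template below.
	type opts struct {
		AdvertiseAddress string
		PodSubnet        string
	}

	// manifest covers just two stanzas of the full config for illustration.
	const manifest = "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: {{.AdvertiseAddress}}\n  bindPort: 8443\n---\nnetworking:\n  podSubnet: \"{{.PodSubnet}}\"\n"

	func main() {
		t := template.Must(template.New("kubeadm").Parse(manifest))
		// writes the rendered manifest to stdout; minikube scp's it instead
		if err := t.Execute(os.Stdout, opts{AdvertiseAddress: "192.168.85.2", PodSubnet: "10.42.0.0/16"}); err != nil {
			log.Fatal(err)
		}
	}
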
	
	I1027 23:29:16.576908 1382384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:29:16.584945 1382384 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:29:16.585026 1382384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:29:16.592502 1382384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 23:29:16.605849 1382384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:29:16.620041 1382384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1027 23:29:16.633545 1382384 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:29:16.637404 1382384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:29:16.648272 1382384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:29:16.775190 1382384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:29:16.792568 1382384 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936 for IP: 192.168.85.2
	I1027 23:29:16.792586 1382384 certs.go:195] generating shared ca certs ...
	I1027 23:29:16.792601 1382384 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:29:16.792765 1382384 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:29:16.792821 1382384 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:29:16.792833 1382384 certs.go:257] generating profile certs ...
	I1027 23:29:16.792916 1382384 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/client.key
	I1027 23:29:16.792993 1382384 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.key.7d12570b
	I1027 23:29:16.793036 1382384 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.key
	I1027 23:29:16.793150 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:29:16.793181 1382384 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:29:16.793202 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:29:16.793228 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:29:16.793255 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:29:16.793281 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:29:16.793330 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:29:16.793917 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:29:16.812607 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:29:16.829964 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:29:16.856222 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:29:16.873487 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 23:29:16.894161 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 23:29:16.922923 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:29:16.959397 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 23:29:17.006472 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:29:17.049337 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:29:17.081201 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:29:17.106034 1382384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:29:17.121728 1382384 ssh_runner.go:195] Run: openssl version
	I1027 23:29:17.129224 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:29:17.145507 1382384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:29:17.149674 1382384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:29:17.149765 1382384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:29:17.196710 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
	I1027 23:29:17.206114 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:29:17.214593 1382384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:29:17.218366 1382384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:29:17.218534 1382384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:29:17.260208 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:29:17.268391 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:29:17.276997 1382384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:29:17.281271 1382384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:29:17.281338 1382384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:29:17.323641 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 23:29:17.331756 1382384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:29:17.335672 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 23:29:17.382471 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 23:29:17.424359 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 23:29:17.467561 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 23:29:17.513139 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 23:29:17.567837 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
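
Each openssl x509 -checkend 86400 above asks whether a certificate remains valid for the next 24 hours; a failing check would force regeneration before the cluster restart. The same test in Go, stdlib only, using one of the cert paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// within d, the Go equivalent of `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		fmt.Println(expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour))
	}
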
	I1027 23:29:17.618470 1382384 kubeadm.go:401] StartCluster: {Name:newest-cni-852936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:29:17.618617 1382384 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:29:17.618713 1382384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:29:17.693163 1382384 cri.go:89] found id: ""
	I1027 23:29:17.693280 1382384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:29:17.707954 1382384 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 23:29:17.708031 1382384 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 23:29:17.708118 1382384 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 23:29:17.719144 1382384 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 23:29:17.719791 1382384 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-852936" does not appear in /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:29:17.720118 1382384 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-1132878/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-852936" cluster setting kubeconfig missing "newest-cni-852936" context setting]
	I1027 23:29:17.720642 1382384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:29:17.722636 1382384 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 23:29:17.745586 1382384 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 23:29:17.745669 1382384 kubeadm.go:602] duration metric: took 37.617775ms to restartPrimaryControlPlane
	I1027 23:29:17.745694 1382384 kubeadm.go:403] duration metric: took 127.234259ms to StartCluster
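
The quick restart above hinges on the diff at 23:29:17.722: when the freshly rendered /var/tmp/minikube/kubeadm.yaml.new matches the kubeadm.yaml already on disk, the control plane is left untouched ("does not require reconfiguration"). A minimal sketch of that comparison, with byte equality standing in for diff -u:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// needsReconfig reports whether the rendered kubeadm manifest differs
	// from the one already on disk, the decision behind the `diff -u` above.
	func needsReconfig(current, rendered string) (bool, error) {
		a, err := os.ReadFile(current)
		if err != nil {
			return true, err // missing file: reconfigure
		}
		b, err := os.ReadFile(rendered)
		if err != nil {
			return true, err
		}
		return !bytes.Equal(a, b), nil
	}

	func main() {
		fmt.Println(needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new"))
	}
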
	I1027 23:29:17.745742 1382384 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:29:17.745841 1382384 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:29:17.746909 1382384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:29:17.747200 1382384 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:29:17.747688 1382384 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:29:17.747770 1382384 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-852936"
	I1027 23:29:17.747783 1382384 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-852936"
	W1027 23:29:17.747789 1382384 addons.go:247] addon storage-provisioner should already be in state true
	I1027 23:29:17.747811 1382384 host.go:66] Checking if "newest-cni-852936" exists ...
	I1027 23:29:17.748343 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:17.748641 1382384 config.go:182] Loaded profile config "newest-cni-852936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:29:17.748732 1382384 addons.go:69] Setting dashboard=true in profile "newest-cni-852936"
	I1027 23:29:17.748772 1382384 addons.go:238] Setting addon dashboard=true in "newest-cni-852936"
	W1027 23:29:17.748798 1382384 addons.go:247] addon dashboard should already be in state true
	I1027 23:29:17.748847 1382384 host.go:66] Checking if "newest-cni-852936" exists ...
	I1027 23:29:17.749340 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:17.749806 1382384 addons.go:69] Setting default-storageclass=true in profile "newest-cni-852936"
	I1027 23:29:17.749822 1382384 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-852936"
	I1027 23:29:17.750092 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:17.759323 1382384 out.go:179] * Verifying Kubernetes components...
	I1027 23:29:17.772375 1382384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:29:17.800819 1382384 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 23:29:17.801942 1382384 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:29:17.806725 1382384 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:29:17.806761 1382384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:29:17.806795 1382384 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 23:29:17.806836 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:17.811489 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 23:29:17.811514 1382384 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 23:29:17.811591 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:17.822485 1382384 addons.go:238] Setting addon default-storageclass=true in "newest-cni-852936"
	W1027 23:29:17.822507 1382384 addons.go:247] addon default-storageclass should already be in state true
	I1027 23:29:17.822532 1382384 host.go:66] Checking if "newest-cni-852936" exists ...
	I1027 23:29:17.822969 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:17.865449 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:17.877907 1382384 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:29:17.877928 1382384 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:29:17.877992 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:17.879738 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:17.900212 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:18.077474 1382384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:29:18.149293 1382384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:29:18.160724 1382384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:29:18.236930 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 23:29:18.237002 1382384 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 23:29:18.328299 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 23:29:18.328364 1382384 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 23:29:18.383950 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 23:29:18.384014 1382384 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 23:29:18.408588 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 23:29:18.408653 1382384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 23:29:18.442883 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 23:29:18.442954 1382384 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 23:29:18.464941 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 23:29:18.465009 1382384 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 23:29:18.491431 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 23:29:18.491509 1382384 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 23:29:18.511476 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 23:29:18.511545 1382384 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 23:29:18.536825 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 23:29:18.536903 1382384 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 23:29:18.559539 1382384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1027 23:29:18.696525 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	W1027 23:29:20.699896 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	I1027 23:29:21.700112 1377654 pod_ready.go:94] pod "coredns-66bc5c9577-lzssb" is "Ready"
	I1027 23:29:21.700136 1377654 pod_ready.go:86] duration metric: took 36.009275195s for pod "coredns-66bc5c9577-lzssb" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.703421 1377654 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.715777 1377654 pod_ready.go:94] pod "etcd-default-k8s-diff-port-336451" is "Ready"
	I1027 23:29:21.715842 1377654 pod_ready.go:86] duration metric: took 12.348506ms for pod "etcd-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.719027 1377654 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.728322 1377654 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-336451" is "Ready"
	I1027 23:29:21.728398 1377654 pod_ready.go:86] duration metric: took 9.29462ms for pod "kube-apiserver-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.732228 1377654 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.895924 1377654 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-336451" is "Ready"
	I1027 23:29:21.896004 1377654 pod_ready.go:86] duration metric: took 163.695676ms for pod "kube-controller-manager-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:22.098328 1377654 pod_ready.go:83] waiting for pod "kube-proxy-n4vzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:22.494663 1377654 pod_ready.go:94] pod "kube-proxy-n4vzn" is "Ready"
	I1027 23:29:22.494740 1377654 pod_ready.go:86] duration metric: took 396.322861ms for pod "kube-proxy-n4vzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:22.694755 1377654 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:23.095902 1377654 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-336451" is "Ready"
	I1027 23:29:23.095941 1377654 pod_ready.go:86] duration metric: took 401.110104ms for pod "kube-scheduler-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:23.095954 1377654 pod_ready.go:40] duration metric: took 37.409990985s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:29:23.191426 1377654 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:29:23.194537 1377654 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-336451" cluster and "default" namespace by default
	I1027 23:29:23.865678 1382384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.78812787s)
	I1027 23:29:23.865733 1382384 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.716420658s)
	I1027 23:29:23.865764 1382384 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:29:23.865819 1382384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:29:23.865890 1382384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.705145908s)
	I1027 23:29:23.866282 1382384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.30666267s)
	I1027 23:29:23.869166 1382384 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-852936 addons enable metrics-server
	
	I1027 23:29:23.896432 1382384 api_server.go:72] duration metric: took 6.149164962s to wait for apiserver process to appear ...
	I1027 23:29:23.896452 1382384 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:29:23.896472 1382384 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 23:29:23.905254 1382384 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 23:29:23.905324 1382384 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 23:29:23.915351 1382384 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 23:29:23.918229 1382384 addons.go:514] duration metric: took 6.170528043s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 23:29:24.396619 1382384 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 23:29:24.404992 1382384 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 23:29:24.406155 1382384 api_server.go:141] control plane version: v1.34.1
	I1027 23:29:24.406180 1382384 api_server.go:131] duration metric: took 509.720774ms to wait for apiserver health ...
	I1027 23:29:24.406189 1382384 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:29:24.409864 1382384 system_pods.go:59] 8 kube-system pods found
	I1027 23:29:24.409906 1382384 system_pods.go:61] "coredns-66bc5c9577-jzn5z" [191e4eff-7490-4e8a-9231-7e634396b226] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 23:29:24.409916 1382384 system_pods.go:61] "etcd-newest-cni-852936" [4d42a25f-5e7b-4657-a6f1-d46bc06216dc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:29:24.409949 1382384 system_pods.go:61] "kindnet-q6tfx" [b3f08f81-257b-4bba-9acc-4b3c88d70bb7] Running
	I1027 23:29:24.409959 1382384 system_pods.go:61] "kube-apiserver-newest-cni-852936" [090b241c-c08c-4306-b40c-871e5421048b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:29:24.409967 1382384 system_pods.go:61] "kube-controller-manager-newest-cni-852936" [5016a35c-4906-416f-981d-3d8eafafac9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:29:24.409976 1382384 system_pods.go:61] "kube-proxy-qcz7m" [8263ca0a-34e2-4388-82ba-1714b8940cba] Running
	I1027 23:29:24.409988 1382384 system_pods.go:61] "kube-scheduler-newest-cni-852936" [4f47dc44-57da-47eb-b115-12f3d5bac007] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:29:24.409994 1382384 system_pods.go:61] "storage-provisioner" [ebb4e6b7-17b5-43ab-b54c-34a6b5b2caa2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 23:29:24.410017 1382384 system_pods.go:74] duration metric: took 3.807388ms to wait for pod list to return data ...
	I1027 23:29:24.410063 1382384 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:29:24.412702 1382384 default_sa.go:45] found service account: "default"
	I1027 23:29:24.412729 1382384 default_sa.go:55] duration metric: took 2.657145ms for default service account to be created ...
	I1027 23:29:24.412743 1382384 kubeadm.go:587] duration metric: took 6.665481562s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 23:29:24.412760 1382384 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:29:24.415832 1382384 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:29:24.415864 1382384 node_conditions.go:123] node cpu capacity is 2
	I1027 23:29:24.415877 1382384 node_conditions.go:105] duration metric: took 3.112233ms to run NodePressure ...
	I1027 23:29:24.415891 1382384 start.go:242] waiting for startup goroutines ...
	I1027 23:29:24.415931 1382384 start.go:247] waiting for cluster config update ...
	I1027 23:29:24.415944 1382384 start.go:256] writing updated cluster config ...
	I1027 23:29:24.416251 1382384 ssh_runner.go:195] Run: rm -f paused
	I1027 23:29:24.473504 1382384 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:29:24.476808 1382384 out.go:179] * Done! kubectl is now configured to use "newest-cni-852936" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.709444344Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.713242478Z" level=info msg="Running pod sandbox: kube-system/kindnet-q6tfx/POD" id=5989fccc-9a0d-4922-9636-6adca3cc973e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.713679259Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.716158416Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=006217a1-9a85-41da-9aa9-8973c2ad6903 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.733641303Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=5989fccc-9a0d-4922-9636-6adca3cc973e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.736952841Z" level=info msg="Ran pod sandbox ac98f4ee737bac6331d902e3203be8998a0746f2130b093d82070abff99222e3 with infra container: kube-system/kube-proxy-qcz7m/POD" id=006217a1-9a85-41da-9aa9-8973c2ad6903 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.740999617Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=3fb24a92-91f1-417d-9e8c-42e0e4ddd5f7 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.746715143Z" level=info msg="Ran pod sandbox c7accf6121c8bdb83946cf64da140bfaffe6caff774f83285db36d9c36c8e87a with infra container: kube-system/kindnet-q6tfx/POD" id=5989fccc-9a0d-4922-9636-6adca3cc973e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.749479137Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3a596dfe-a682-4f8e-a19f-565fb85e62ac name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.749870429Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=25ad7ef2-e5b0-4a14-864f-fc592b647119 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.753583383Z" level=info msg="Creating container: kube-system/kube-proxy-qcz7m/kube-proxy" id=f85b0974-706d-4779-86fb-a19657c0f7a8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.754169607Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.753875876Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d77a8a31-ade2-4311-9d1a-97b311d83c8d name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.766788204Z" level=info msg="Creating container: kube-system/kindnet-q6tfx/kindnet-cni" id=a1a4730f-ebcd-49cf-b904-6338ebb52ff1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.767100118Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.78011498Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.780859679Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.783440286Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.787016196Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.843969249Z" level=info msg="Created container d84aeb60c3d677348b168a700554376d45dc7c3accb07b90ed78a7aeb9c54b4d: kube-system/kindnet-q6tfx/kindnet-cni" id=a1a4730f-ebcd-49cf-b904-6338ebb52ff1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.844759889Z" level=info msg="Starting container: d84aeb60c3d677348b168a700554376d45dc7c3accb07b90ed78a7aeb9c54b4d" id=d5aa0a43-8a8e-4618-a94a-1d245286f01d name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.847825877Z" level=info msg="Started container" PID=1053 containerID=d84aeb60c3d677348b168a700554376d45dc7c3accb07b90ed78a7aeb9c54b4d description=kube-system/kindnet-q6tfx/kindnet-cni id=d5aa0a43-8a8e-4618-a94a-1d245286f01d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c7accf6121c8bdb83946cf64da140bfaffe6caff774f83285db36d9c36c8e87a
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.910297595Z" level=info msg="Created container e2a7c7914491369242dd969c692d5341b722cd50dd34558d711d71dbe029a0ae: kube-system/kube-proxy-qcz7m/kube-proxy" id=f85b0974-706d-4779-86fb-a19657c0f7a8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.917423504Z" level=info msg="Starting container: e2a7c7914491369242dd969c692d5341b722cd50dd34558d711d71dbe029a0ae" id=9cd087fb-5d6b-4871-a207-7e6194df99be name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.922899938Z" level=info msg="Started container" PID=1054 containerID=e2a7c7914491369242dd969c692d5341b722cd50dd34558d711d71dbe029a0ae description=kube-system/kube-proxy-qcz7m/kube-proxy id=9cd087fb-5d6b-4871-a207-7e6194df99be name=/runtime.v1.RuntimeService/StartContainer sandboxID=ac98f4ee737bac6331d902e3203be8998a0746f2130b093d82070abff99222e3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e2a7c79144913       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   ac98f4ee737ba       kube-proxy-qcz7m                            kube-system
	d84aeb60c3d67       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   c7accf6121c8b       kindnet-q6tfx                               kube-system
	330dc9b597bf2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   10 seconds ago      Running             kube-scheduler            1                   44a7e9ca9bd38       kube-scheduler-newest-cni-852936            kube-system
	7ba655a45a78e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   10 seconds ago      Running             kube-controller-manager   1                   74e8b8cb55b76       kube-controller-manager-newest-cni-852936   kube-system
	c24fe513253c6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   10 seconds ago      Running             kube-apiserver            1                   5c5ca8e1a7ef4       kube-apiserver-newest-cni-852936            kube-system
	88f79d403d4f7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   10 seconds ago      Running             etcd                      1                   973cd30ccde51       etcd-newest-cni-852936                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-852936
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-852936
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=newest-cni-852936
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T23_28_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 23:28:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-852936
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 23:29:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 23:29:22 +0000   Mon, 27 Oct 2025 23:28:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 23:29:22 +0000   Mon, 27 Oct 2025 23:28:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 23:29:22 +0000   Mon, 27 Oct 2025 23:28:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 27 Oct 2025 23:29:22 +0000   Mon, 27 Oct 2025 23:28:47 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-852936
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                92cc5ecf-38b1-42c9-8ddf-bd258bac7f0d
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-852936                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         31s
	  kube-system                 kindnet-q6tfx                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-newest-cni-852936             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-852936    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-qcz7m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-newest-cni-852936             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 23s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  42s (x9 over 42s)  kubelet          Node newest-cni-852936 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet          Node newest-cni-852936 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     42s (x7 over 42s)  kubelet          Node newest-cni-852936 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    31s                kubelet          Node newest-cni-852936 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 31s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  31s                kubelet          Node newest-cni-852936 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     31s                kubelet          Node newest-cni-852936 status is now: NodeHasSufficientPID
	  Normal   Starting                 31s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           27s                node-controller  Node newest-cni-852936 event: Registered Node newest-cni-852936 in Controller
	  Normal   Starting                 12s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11s (x8 over 11s)  kubelet          Node newest-cni-852936 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet          Node newest-cni-852936 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11s (x8 over 11s)  kubelet          Node newest-cni-852936 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-852936 event: Registered Node newest-cni-852936 in Controller
	
	
	==> dmesg <==
	[Oct27 23:06] overlayfs: idmapped layers are currently not supported
	[  +3.129054] overlayfs: idmapped layers are currently not supported
	[Oct27 23:08] overlayfs: idmapped layers are currently not supported
	[Oct27 23:09] overlayfs: idmapped layers are currently not supported
	[  +0.696324] overlayfs: idmapped layers are currently not supported
	[ +42.065460] overlayfs: idmapped layers are currently not supported
	[Oct27 23:10] overlayfs: idmapped layers are currently not supported
	[ +23.722860] overlayfs: idmapped layers are currently not supported
	[Oct27 23:16] overlayfs: idmapped layers are currently not supported
	[Oct27 23:17] overlayfs: idmapped layers are currently not supported
	[Oct27 23:18] overlayfs: idmapped layers are currently not supported
	[Oct27 23:19] overlayfs: idmapped layers are currently not supported
	[Oct27 23:20] overlayfs: idmapped layers are currently not supported
	[Oct27 23:21] overlayfs: idmapped layers are currently not supported
	[Oct27 23:22] overlayfs: idmapped layers are currently not supported
	[ +34.590925] overlayfs: idmapped layers are currently not supported
	[Oct27 23:23] overlayfs: idmapped layers are currently not supported
	[  +6.906011] overlayfs: idmapped layers are currently not supported
	[Oct27 23:25] overlayfs: idmapped layers are currently not supported
	[  +2.284017] overlayfs: idmapped layers are currently not supported
	[Oct27 23:27] overlayfs: idmapped layers are currently not supported
	[  +6.661421] overlayfs: idmapped layers are currently not supported
	[Oct27 23:28] overlayfs: idmapped layers are currently not supported
	[ +11.644898] overlayfs: idmapped layers are currently not supported
	[Oct27 23:29] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [88f79d403d4f728d053f809d89ffcfddf313be934b17854c1851af271cdcc8f3] <==
	{"level":"warn","ts":"2025-10-27T23:29:19.759958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:19.783497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:19.811796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:19.838334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:19.860180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:19.876609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:19.899563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:19.926589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:19.948573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:19.975840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.034820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.068943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.087052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.121796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.160334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.191064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.236305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.270089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.323041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.362564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.391700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.441389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.502283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.519667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.618707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45094","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:29:28 up  6:11,  0 user,  load average: 5.67, 4.53, 3.63
	Linux newest-cni-852936 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d84aeb60c3d677348b168a700554376d45dc7c3accb07b90ed78a7aeb9c54b4d] <==
	I1027 23:29:22.942619       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:29:22.948063       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 23:29:22.948207       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:29:22.948220       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:29:22.948231       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:29:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:29:23.169468       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:29:23.169494       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:29:23.169510       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:29:23.181064       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [c24fe513253c6dc838d98980bfab0d60b8ee3c4899660c10d658adf5d75315be] <==
	I1027 23:29:22.236432       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 23:29:22.237063       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 23:29:22.237135       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 23:29:22.237165       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 23:29:22.237171       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 23:29:22.237248       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 23:29:22.237285       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 23:29:22.251787       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 23:29:22.251878       1 policy_source.go:240] refreshing policies
	I1027 23:29:22.287889       1 cache.go:39] Caches are synced for autoregister controller
	I1027 23:29:22.324652       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 23:29:22.324998       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 23:29:22.428330       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:29:22.433479       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 23:29:22.459481       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 23:29:22.757150       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 23:29:23.047481       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 23:29:23.212429       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 23:29:23.380806       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 23:29:23.445221       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 23:29:23.738356       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.83.219"}
	I1027 23:29:23.799625       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.153.86"}
	I1027 23:29:25.951584       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 23:29:26.294830       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 23:29:26.396085       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [7ba655a45a78e5e901dbbfebe2a50cccb83aae7f62e5ff23596fb3ec81ccb126] <==
	I1027 23:29:25.979368       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 23:29:25.979592       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 23:29:25.979685       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 23:29:25.979718       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 23:29:25.979730       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 23:29:25.979736       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 23:29:25.982584       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 23:29:25.984027       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 23:29:25.986626       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 23:29:25.987506       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 23:29:25.987516       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 23:29:25.987538       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 23:29:25.987597       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 23:29:25.989980       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 23:29:25.995754       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 23:29:25.995956       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:29:25.996048       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 23:29:25.996174       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 23:29:25.996614       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 23:29:25.996089       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 23:29:25.998320       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-852936"
	I1027 23:29:25.998502       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 23:29:26.003651       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 23:29:26.008011       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 23:29:26.010982       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	
	
	==> kube-proxy [e2a7c7914491369242dd969c692d5341b722cd50dd34558d711d71dbe029a0ae] <==
	I1027 23:29:23.415421       1 server_linux.go:53] "Using iptables proxy"
	I1027 23:29:23.742710       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 23:29:23.844997       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 23:29:23.854514       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 23:29:23.854639       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 23:29:23.901468       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:29:23.901526       1 server_linux.go:132] "Using iptables Proxier"
	I1027 23:29:23.913407       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 23:29:23.913727       1 server.go:527] "Version info" version="v1.34.1"
	I1027 23:29:23.913739       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:29:23.916579       1 config.go:200] "Starting service config controller"
	I1027 23:29:23.916602       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 23:29:23.916623       1 config.go:106] "Starting endpoint slice config controller"
	I1027 23:29:23.916631       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 23:29:23.916643       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 23:29:23.916647       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 23:29:23.920620       1 config.go:309] "Starting node config controller"
	I1027 23:29:23.920641       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 23:29:23.920650       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 23:29:24.017577       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 23:29:24.017691       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 23:29:24.017771       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [330dc9b597bf25efc2a585d5e204a8122f12b9d06572abb8eca0714117e09773] <==
	I1027 23:29:19.997332       1 serving.go:386] Generated self-signed cert in-memory
	I1027 23:29:22.567995       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 23:29:22.568024       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:29:22.584244       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 23:29:22.584284       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 23:29:22.584353       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:29:22.584362       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:29:22.584376       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:29:22.584383       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:29:22.585493       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 23:29:22.585738       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 23:29:22.690837       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:29:22.690905       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 23:29:22.690997       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 23:29:20 newest-cni-852936 kubelet[729]: E1027 23:29:20.117232     729 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-852936\" not found" node="newest-cni-852936"
	Oct 27 23:29:21 newest-cni-852936 kubelet[729]: E1027 23:29:21.110803     729 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-852936\" not found" node="newest-cni-852936"
	Oct 27 23:29:21 newest-cni-852936 kubelet[729]: I1027 23:29:21.898871     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-852936"
	Oct 27 23:29:21 newest-cni-852936 kubelet[729]: I1027 23:29:21.960658     729 apiserver.go:52] "Watching apiserver"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.182191     729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.276415     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b3f08f81-257b-4bba-9acc-4b3c88d70bb7-cni-cfg\") pod \"kindnet-q6tfx\" (UID: \"b3f08f81-257b-4bba-9acc-4b3c88d70bb7\") " pod="kube-system/kindnet-q6tfx"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.276481     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3f08f81-257b-4bba-9acc-4b3c88d70bb7-lib-modules\") pod \"kindnet-q6tfx\" (UID: \"b3f08f81-257b-4bba-9acc-4b3c88d70bb7\") " pod="kube-system/kindnet-q6tfx"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.276514     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8263ca0a-34e2-4388-82ba-1714b8940cba-lib-modules\") pod \"kube-proxy-qcz7m\" (UID: \"8263ca0a-34e2-4388-82ba-1714b8940cba\") " pod="kube-system/kube-proxy-qcz7m"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.276561     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3f08f81-257b-4bba-9acc-4b3c88d70bb7-xtables-lock\") pod \"kindnet-q6tfx\" (UID: \"b3f08f81-257b-4bba-9acc-4b3c88d70bb7\") " pod="kube-system/kindnet-q6tfx"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.276580     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8263ca0a-34e2-4388-82ba-1714b8940cba-xtables-lock\") pod \"kube-proxy-qcz7m\" (UID: \"8263ca0a-34e2-4388-82ba-1714b8940cba\") " pod="kube-system/kube-proxy-qcz7m"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.490964     729 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.504486     729 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-852936"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.504594     729 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-852936"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.504636     729 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.507031     729 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: E1027 23:29:22.531006     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-852936\" already exists" pod="kube-system/etcd-newest-cni-852936"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.531042     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-852936"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: E1027 23:29:22.573057     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-852936\" already exists" pod="kube-system/kube-apiserver-newest-cni-852936"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.573094     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-852936"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: E1027 23:29:22.686670     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-852936\" already exists" pod="kube-system/kube-controller-manager-newest-cni-852936"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.686761     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-852936"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: E1027 23:29:22.800427     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-852936\" already exists" pod="kube-system/kube-scheduler-newest-cni-852936"
	Oct 27 23:29:25 newest-cni-852936 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 23:29:25 newest-cni-852936 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 23:29:25 newest-cni-852936 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
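Note on the 500 responses from /healthz in the log above: the only failing check is [-]poststarthook/rbac/bootstrap-roles, which is expected to fail briefly while a freshly restarted apiserver re-seeds its bootstrap RBAC roles, and the very next probe at 23:29:24 returns 200. A minimal sketch of the same probe, assuming the kubeconfig context created by this run:

	kubectl --context newest-cni-852936 get --raw='/healthz?verbose'

The verbose query parameter makes the endpoint list each check with the [+]/[-] markers seen in the captured output.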
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-852936 -n newest-cni-852936
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-852936 -n newest-cni-852936: exit status 2 (375.848416ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
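The --format flag above renders minikube's status struct through a Go template, so {{.APIServer}} prints only the apiserver field. A nonzero exit (here 2) indicates that at least one cluster component is not in the Running state, which is expected for a cluster that has just been paused, hence the harness's "may be ok". A sketch of the unformatted equivalent, assuming the same profile:

	out/minikube-linux-arm64 status -p newest-cni-852936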
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-852936 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-jzn5z storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bc2g8 kubernetes-dashboard-855c9754f9-rkp9z
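The jsonpath expression above flattens the non-Running pod names onto one line for the log; a more readable variant of the same query, assuming the same context, is:

	kubectl --context newest-cni-852936 get po -A --field-selector=status.phase!=Running

Note that scheduled-but-not-Ready pods still report phase Running, so this filter only catches Pending/Succeeded/Failed pods such as the Unschedulable coredns and storage-provisioner pods listed earlier.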
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-852936 describe pod coredns-66bc5c9577-jzn5z storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bc2g8 kubernetes-dashboard-855c9754f9-rkp9z
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-852936 describe pod coredns-66bc5c9577-jzn5z storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bc2g8 kubernetes-dashboard-855c9754f9-rkp9z: exit status 1 (88.747048ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-jzn5z" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-bc2g8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-rkp9z" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-852936 describe pod coredns-66bc5c9577-jzn5z storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bc2g8 kubernetes-dashboard-855c9754f9-rkp9z: exit status 1
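The NotFound errors above are a timing artifact rather than a separate failure: the pod list at helpers_test.go:280 and the describe at helpers_test.go:285 run moments apart, and the four pods were deleted or recreated under new names in between. The same non-running-pod query can be reproduced by hand (a sketch using the kubectl context from this log):

	kubectl --context newest-cni-852936 get pods -A --field-selector=status.phase!=Running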
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-852936
helpers_test.go:243: (dbg) docker inspect newest-cni-852936:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb",
	        "Created": "2025-10-27T23:28:26.049254307Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1382512,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T23:29:10.063857156Z",
	            "FinishedAt": "2025-10-27T23:29:09.245239915Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb/hostname",
	        "HostsPath": "/var/lib/docker/containers/65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb/hosts",
	        "LogPath": "/var/lib/docker/containers/65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb/65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb-json.log",
	        "Name": "/newest-cni-852936",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-852936:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-852936",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "65a8d98d29dcd69d18f14535475393cbcc0834cf172538f60803e2df3f06b4fb",
	                "LowerDir": "/var/lib/docker/overlay2/683ddf4845681cbcd053af9f794e7938bfc1ce46288f9101f6ced4d05d48a278-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/683ddf4845681cbcd053af9f794e7938bfc1ce46288f9101f6ced4d05d48a278/merged",
	                "UpperDir": "/var/lib/docker/overlay2/683ddf4845681cbcd053af9f794e7938bfc1ce46288f9101f6ced4d05d48a278/diff",
	                "WorkDir": "/var/lib/docker/overlay2/683ddf4845681cbcd053af9f794e7938bfc1ce46288f9101f6ced4d05d48a278/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-852936",
	                "Source": "/var/lib/docker/volumes/newest-cni-852936/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-852936",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-852936",
	                "name.minikube.sigs.k8s.io": "newest-cni-852936",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e6d916a599100e7d9736b51565186c500ba78176398322ddf895a1204ac23c25",
	            "SandboxKey": "/var/run/docker/netns/e6d916a59910",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34604"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34605"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34608"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34606"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34607"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-852936": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:2e:4a:30:b7:e0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1cc9b34231316ca6e2b3bcce7977749e2a63825d24e6f604ea63947f22c91175",
	                    "EndpointID": "19241d30e867fba1f7bc7078f90f06bf0cca7083b14d39ca75aec3e358f22f1c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-852936",
	                        "65a8d98d29dc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
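For targeted post-mortems, `docker inspect` also accepts a Go template instead of dumping the whole document; for example (a sketch against the container above):

	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' newest-cni-852936
	docker inspect -f '{{json .NetworkSettings.Ports}}' newest-cni-852936

The second command prints just the port map shown above (e.g. 8443/tcp published on 127.0.0.1:34607).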
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-852936 -n newest-cni-852936
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-852936 -n newest-cni-852936: exit status 2 (376.411214ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
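Exit status 2 is consistent with a pause in progress: `minikube status` exits non-zero whenever any component is not Running, and the `{{.Host}}` and `{{.APIServer}}` templates used above each print a single field. A combined view (a sketch; Kubelet is another field exposed by the status template):

	out/minikube-linux-arm64 status -p newest-cni-852936 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'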
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-852936 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-852936 logs -n 25: (1.128257274s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ no-preload-947754 image list --format=json                                                                                                                                                                                                    │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ pause   │ -p no-preload-947754 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	│ delete  │ -p no-preload-947754                                                                                                                                                                                                                          │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ delete  │ -p no-preload-947754                                                                                                                                                                                                                          │ no-preload-947754            │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ delete  │ -p disable-driver-mounts-247293                                                                                                                                                                                                               │ disable-driver-mounts-247293 │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ start   │ -p default-k8s-diff-port-336451 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:28 UTC │
	│ addons  │ enable metrics-server -p embed-certs-790322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	│ stop    │ -p embed-certs-790322 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-790322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ start   │ -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:27 UTC │
	│ image   │ embed-certs-790322 image list --format=json                                                                                                                                                                                                   │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ pause   │ -p embed-certs-790322 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-336451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-336451 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ delete  │ -p embed-certs-790322                                                                                                                                                                                                                         │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ delete  │ -p embed-certs-790322                                                                                                                                                                                                                         │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ start   │ -p newest-cni-852936 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:29 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-336451 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ start   │ -p default-k8s-diff-port-336451 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:29 UTC │
	│ addons  │ enable metrics-server -p newest-cni-852936 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │                     │
	│ stop    │ -p newest-cni-852936 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ addons  │ enable dashboard -p newest-cni-852936 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ start   │ -p newest-cni-852936 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ image   │ newest-cni-852936 image list --format=json                                                                                                                                                                                                    │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ pause   │ -p newest-cni-852936 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 23:29:09
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 23:29:09.766117 1382384 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:29:09.766264 1382384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:29:09.766276 1382384 out.go:374] Setting ErrFile to fd 2...
	I1027 23:29:09.766281 1382384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:29:09.766839 1382384 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:29:09.767786 1382384 out.go:368] Setting JSON to false
	I1027 23:29:09.769056 1382384 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22299,"bootTime":1761585451,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 23:29:09.769139 1382384 start.go:143] virtualization:  
	I1027 23:29:09.772858 1382384 out.go:179] * [newest-cni-852936] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 23:29:09.776686 1382384 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:29:09.776813 1382384 notify.go:221] Checking for updates...
	I1027 23:29:09.782290 1382384 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:29:09.785212 1382384 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:29:09.788210 1382384 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 23:29:09.791116 1382384 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 23:29:09.793964 1382384 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 23:29:09.797372 1382384 config.go:182] Loaded profile config "newest-cni-852936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:29:09.797914 1382384 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:29:09.833947 1382384 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 23:29:09.834073 1382384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:29:09.893931 1382384 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 23:29:09.878864517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:29:09.894062 1382384 docker.go:318] overlay module found
	I1027 23:29:09.897336 1382384 out.go:179] * Using the docker driver based on existing profile
	I1027 23:29:09.900341 1382384 start.go:307] selected driver: docker
	I1027 23:29:09.900381 1382384 start.go:928] validating driver "docker" against &{Name:newest-cni-852936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:29:09.900493 1382384 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:29:09.901343 1382384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:29:09.956323 1382384 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 23:29:09.947321156 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:29:09.956662 1382384 start_flags.go:1010] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 23:29:09.956696 1382384 cni.go:84] Creating CNI manager for ""
	I1027 23:29:09.956755 1382384 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:29:09.956801 1382384 start.go:351] cluster config:
	{Name:newest-cni-852936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:29:09.959906 1382384 out.go:179] * Starting "newest-cni-852936" primary control-plane node in "newest-cni-852936" cluster
	I1027 23:29:09.962722 1382384 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 23:29:09.965885 1382384 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 23:29:09.968839 1382384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:29:09.968947 1382384 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 23:29:09.968971 1382384 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 23:29:09.968983 1382384 cache.go:59] Caching tarball of preloaded images
	I1027 23:29:09.969090 1382384 preload.go:233] Found /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 23:29:09.969100 1382384 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 23:29:09.969208 1382384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/config.json ...
	I1027 23:29:09.999805 1382384 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 23:29:09.999844 1382384 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 23:29:09.999859 1382384 cache.go:233] Successfully downloaded all kic artifacts
	I1027 23:29:09.999881 1382384 start.go:360] acquireMachinesLock for newest-cni-852936: {Name:mk3f294285068916d485e6bfcdad9561ce18d17d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:29:09.999976 1382384 start.go:364] duration metric: took 68.694µs to acquireMachinesLock for "newest-cni-852936"
	I1027 23:29:10.000031 1382384 start.go:96] Skipping create...Using existing machine configuration
	I1027 23:29:10.000085 1382384 fix.go:55] fixHost starting: 
	I1027 23:29:10.000495 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:10.025325 1382384 fix.go:113] recreateIfNeeded on newest-cni-852936: state=Stopped err=<nil>
	W1027 23:29:10.025370 1382384 fix.go:139] unexpected machine state, will restart: <nil>
	W1027 23:29:08.700022 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	W1027 23:29:11.196407 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	I1027 23:29:10.028623 1382384 out.go:252] * Restarting existing docker container for "newest-cni-852936" ...
	I1027 23:29:10.028792 1382384 cli_runner.go:164] Run: docker start newest-cni-852936
	I1027 23:29:10.308194 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:10.330658 1382384 kic.go:430] container "newest-cni-852936" state is running.
	I1027 23:29:10.331059 1382384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852936
	I1027 23:29:10.353242 1382384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/config.json ...
	I1027 23:29:10.353470 1382384 machine.go:94] provisionDockerMachine start ...
	I1027 23:29:10.353542 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:10.372326 1382384 main.go:143] libmachine: Using SSH client type: native
	I1027 23:29:10.372679 1382384 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34604 <nil> <nil>}
	I1027 23:29:10.372697 1382384 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:29:10.373227 1382384 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54926->127.0.0.1:34604: read: connection reset by peer
	I1027 23:29:13.522271 1382384 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-852936
	
	I1027 23:29:13.522368 1382384 ubuntu.go:182] provisioning hostname "newest-cni-852936"
	I1027 23:29:13.522473 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:13.543423 1382384 main.go:143] libmachine: Using SSH client type: native
	I1027 23:29:13.543747 1382384 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34604 <nil> <nil>}
	I1027 23:29:13.543767 1382384 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-852936 && echo "newest-cni-852936" | sudo tee /etc/hostname
	I1027 23:29:13.705024 1382384 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-852936
	
	I1027 23:29:13.705100 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:13.724776 1382384 main.go:143] libmachine: Using SSH client type: native
	I1027 23:29:13.725087 1382384 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34604 <nil> <nil>}
	I1027 23:29:13.725105 1382384 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-852936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-852936/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-852936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:29:13.874768 1382384 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 23:29:13.874793 1382384 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:29:13.874815 1382384 ubuntu.go:190] setting up certificates
	I1027 23:29:13.874826 1382384 provision.go:84] configureAuth start
	I1027 23:29:13.874883 1382384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852936
	I1027 23:29:13.897512 1382384 provision.go:143] copyHostCerts
	I1027 23:29:13.897574 1382384 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:29:13.897589 1382384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:29:13.897665 1382384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:29:13.897760 1382384 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:29:13.897765 1382384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:29:13.897791 1382384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:29:13.897849 1382384 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:29:13.897854 1382384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:29:13.897875 1382384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:29:13.897919 1382384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.newest-cni-852936 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-852936]
	I1027 23:29:14.197889 1382384 provision.go:177] copyRemoteCerts
	I1027 23:29:14.198003 1382384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:29:14.198069 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.216790 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:14.322005 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:29:14.339619 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 23:29:14.357698 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 23:29:14.374994 1382384 provision.go:87] duration metric: took 500.144707ms to configureAuth
	I1027 23:29:14.375019 1382384 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:29:14.375217 1382384 config.go:182] Loaded profile config "newest-cni-852936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:29:14.375326 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.392639 1382384 main.go:143] libmachine: Using SSH client type: native
	I1027 23:29:14.392951 1382384 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34604 <nil> <nil>}
	I1027 23:29:14.392965 1382384 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:29:14.687600 1382384 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:29:14.687621 1382384 machine.go:97] duration metric: took 4.334134462s to provisionDockerMachine
	I1027 23:29:14.687665 1382384 start.go:293] postStartSetup for "newest-cni-852936" (driver="docker")
	I1027 23:29:14.687685 1382384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:29:14.687758 1382384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:29:14.687803 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.707820 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:14.810235 1382384 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:29:14.813577 1382384 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:29:14.813651 1382384 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:29:14.813665 1382384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:29:14.813736 1382384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:29:14.813819 1382384 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:29:14.813926 1382384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:29:14.821590 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:29:14.839199 1382384 start.go:296] duration metric: took 151.517291ms for postStartSetup
	I1027 23:29:14.839285 1382384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:29:14.839332 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.857380 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:14.963797 1382384 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:29:14.968647 1382384 fix.go:57] duration metric: took 4.968601832s for fixHost
	I1027 23:29:14.968672 1382384 start.go:83] releasing machines lock for "newest-cni-852936", held for 4.96867508s
	I1027 23:29:14.968743 1382384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852936
	I1027 23:29:14.985572 1382384 ssh_runner.go:195] Run: cat /version.json
	I1027 23:29:14.985633 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.985873 1382384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:29:14.985939 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:15.005851 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:15.021224 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:15.134518 1382384 ssh_runner.go:195] Run: systemctl --version
	I1027 23:29:15.236918 1382384 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:29:15.280309 1382384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:29:15.285018 1382384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:29:15.285087 1382384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:29:15.293768 1382384 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 23:29:15.293791 1382384 start.go:496] detecting cgroup driver to use...
	I1027 23:29:15.293821 1382384 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 23:29:15.293867 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:29:15.309499 1382384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:29:15.323058 1382384 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:29:15.323175 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:29:15.339572 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:29:15.354227 1382384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:29:15.468373 1382384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:29:15.591069 1382384 docker.go:234] disabling docker service ...
	I1027 23:29:15.591189 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:29:15.606878 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:29:15.620798 1382384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:29:15.748929 1382384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:29:15.872886 1382384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:29:15.890660 1382384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:29:15.906654 1382384 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:29:15.906761 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.916506 1382384 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:29:15.916600 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.926592 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.936286 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.945124 1382384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:29:15.953537 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.962746 1382384 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.971004 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.979956 1382384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:29:15.987602 1382384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:29:16.001973 1382384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:29:16.135477 1382384 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 23:29:16.286541 1382384 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:29:16.286667 1382384 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:29:16.291239 1382384 start.go:564] Will wait 60s for crictl version
	I1027 23:29:16.291360 1382384 ssh_runner.go:195] Run: which crictl
	I1027 23:29:16.294882 1382384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:29:16.321680 1382384 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 23:29:16.321849 1382384 ssh_runner.go:195] Run: crio --version
	I1027 23:29:16.360828 1382384 ssh_runner.go:195] Run: crio --version
	I1027 23:29:16.393456 1382384 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 23:29:16.396391 1382384 cli_runner.go:164] Run: docker network inspect newest-cni-852936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
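The Go template in the docker network inspect call above pulls name, driver, subnet, gateway, MTU, and per-container IPs out in one shot. A trimmed standalone version of the same idea (network name taken from the log):

    # Just the subnet and gateway of the cluster network
    docker network inspect newest-cni-852936 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'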
	I1027 23:29:16.413033 1382384 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 23:29:16.416904 1382384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
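The hosts rewrite above is deliberately idempotent: any stale host.minikube.internal line is filtered out, the current mapping is appended, and the result lands in /etc/hosts via a single sudo cp so a half-written file is never left behind. The same pattern, unpacked:

    # Drop the old entry, append the fresh one, swap the file in whole
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.85.1\thost.minikube.internal'
    } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts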
	I1027 23:29:16.429883 1382384 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1027 23:29:13.697418 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	W1027 23:29:16.200317 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	I1027 23:29:16.432630 1382384 kubeadm.go:884] updating cluster {Name:newest-cni-852936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:29:16.432775 1382384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:29:16.432862 1382384 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:29:16.470089 1382384 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:29:16.470114 1382384 crio.go:433] Images already preloaded, skipping extraction
	I1027 23:29:16.470176 1382384 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:29:16.502365 1382384 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:29:16.502412 1382384 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:29:16.502461 1382384 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 23:29:16.502589 1382384 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-852936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
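The bare ExecStart= line in the unit fragment above is not a mistake: systemd only allows multiple ExecStart entries for Type=oneshot units, so an empty assignment first clears the inherited command line and the following ExecStart replaces it rather than appending. Once the unit files below are copied over, the merged result can be inspected on the node with:

    # Show the kubelet unit with all drop-ins merged in
    systemctl cat kubelet
    systemd-analyze verify kubelet.service   # optional sanity check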
	I1027 23:29:16.502687 1382384 ssh_runner.go:195] Run: crio config
	I1027 23:29:16.576598 1382384 cni.go:84] Creating CNI manager for ""
	I1027 23:29:16.576620 1382384 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:29:16.576659 1382384 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1027 23:29:16.576689 1382384 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-852936 NodeName:newest-cni-852936 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:29:16.576834 1382384 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-852936"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
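The rendered config above pins the CRI socket everywhere, turns leader election off for the single-node control plane, and effectively disables kubelet eviction (imageGCHighThresholdPercent: 100, all evictionHard thresholds at 0%) so CI disk pressure cannot evict the test pods. Recent kubeadm releases (v1.26+) can lint such a file before it is used; a hedged check against the path the scp below writes:

    # Validate the generated kubeadm config (subcommand available in kubeadm >= v1.26)
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new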
	
	I1027 23:29:16.576908 1382384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:29:16.584945 1382384 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:29:16.585026 1382384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:29:16.592502 1382384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 23:29:16.605849 1382384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:29:16.620041 1382384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1027 23:29:16.633545 1382384 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:29:16.637404 1382384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:29:16.648272 1382384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:29:16.775190 1382384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:29:16.792568 1382384 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936 for IP: 192.168.85.2
	I1027 23:29:16.792586 1382384 certs.go:195] generating shared ca certs ...
	I1027 23:29:16.792601 1382384 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:29:16.792765 1382384 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:29:16.792821 1382384 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:29:16.792833 1382384 certs.go:257] generating profile certs ...
	I1027 23:29:16.792916 1382384 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/client.key
	I1027 23:29:16.792993 1382384 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.key.7d12570b
	I1027 23:29:16.793036 1382384 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.key
	I1027 23:29:16.793150 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:29:16.793181 1382384 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:29:16.793202 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:29:16.793228 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:29:16.793255 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:29:16.793281 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:29:16.793330 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:29:16.793917 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:29:16.812607 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:29:16.829964 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:29:16.856222 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:29:16.873487 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 23:29:16.894161 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 23:29:16.922923 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:29:16.959397 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 23:29:17.006472 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:29:17.049337 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:29:17.081201 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:29:17.106034 1382384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:29:17.121728 1382384 ssh_runner.go:195] Run: openssl version
	I1027 23:29:17.129224 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:29:17.145507 1382384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:29:17.149674 1382384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:29:17.149765 1382384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:29:17.196710 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
	I1027 23:29:17.206114 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:29:17.214593 1382384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:29:17.218366 1382384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:29:17.218534 1382384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:29:17.260208 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:29:17.268391 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:29:17.276997 1382384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:29:17.281271 1382384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:29:17.281338 1382384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:29:17.323641 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
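The openssl x509 -hash calls explain the opaque symlink names: OpenSSL looks CAs up in /etc/ssl/certs by subject-name hash, so each installed PEM needs a <hash>.0 link beside it (b5213941.0 for minikubeCA above). Reproducing one link by hand:

    # Compute the subject hash and create the lookup symlink OpenSSL expects
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"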
	I1027 23:29:17.331756 1382384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:29:17.335672 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 23:29:17.382471 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 23:29:17.424359 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 23:29:17.467561 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 23:29:17.513139 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 23:29:17.567837 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
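Each -checkend 86400 run above asks whether the certificate will still be valid 24 hours (86,400 seconds) from now; a non-zero exit would mark it for regeneration. Standalone form of the same probe:

    # Exit 0 if the cert is still valid in 24h, non-zero if it will have expired
    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "ok for another day" || echo "renew soon"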
	I1027 23:29:17.618470 1382384 kubeadm.go:401] StartCluster: {Name:newest-cni-852936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:29:17.618617 1382384 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:29:17.618713 1382384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:29:17.693163 1382384 cri.go:89] found id: ""
	I1027 23:29:17.693280 1382384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:29:17.707954 1382384 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 23:29:17.708031 1382384 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 23:29:17.708118 1382384 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 23:29:17.719144 1382384 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 23:29:17.719791 1382384 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-852936" does not appear in /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:29:17.720118 1382384 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-1132878/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-852936" cluster setting kubeconfig missing "newest-cni-852936" context setting]
	I1027 23:29:17.720642 1382384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:29:17.722636 1382384 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 23:29:17.745586 1382384 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 23:29:17.745669 1382384 kubeadm.go:602] duration metric: took 37.617775ms to restartPrimaryControlPlane
	I1027 23:29:17.745694 1382384 kubeadm.go:403] duration metric: took 127.234259ms to StartCluster
	I1027 23:29:17.745742 1382384 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:29:17.745841 1382384 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:29:17.746909 1382384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:29:17.747200 1382384 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:29:17.747688 1382384 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:29:17.747770 1382384 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-852936"
	I1027 23:29:17.747783 1382384 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-852936"
	W1027 23:29:17.747789 1382384 addons.go:247] addon storage-provisioner should already be in state true
	I1027 23:29:17.747811 1382384 host.go:66] Checking if "newest-cni-852936" exists ...
	I1027 23:29:17.748343 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:17.748641 1382384 config.go:182] Loaded profile config "newest-cni-852936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:29:17.748732 1382384 addons.go:69] Setting dashboard=true in profile "newest-cni-852936"
	I1027 23:29:17.748772 1382384 addons.go:238] Setting addon dashboard=true in "newest-cni-852936"
	W1027 23:29:17.748798 1382384 addons.go:247] addon dashboard should already be in state true
	I1027 23:29:17.748847 1382384 host.go:66] Checking if "newest-cni-852936" exists ...
	I1027 23:29:17.749340 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:17.749806 1382384 addons.go:69] Setting default-storageclass=true in profile "newest-cni-852936"
	I1027 23:29:17.749822 1382384 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-852936"
	I1027 23:29:17.750092 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:17.759323 1382384 out.go:179] * Verifying Kubernetes components...
	I1027 23:29:17.772375 1382384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:29:17.800819 1382384 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 23:29:17.801942 1382384 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:29:17.806725 1382384 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:29:17.806761 1382384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:29:17.806795 1382384 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 23:29:17.806836 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:17.811489 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 23:29:17.811514 1382384 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 23:29:17.811591 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:17.822485 1382384 addons.go:238] Setting addon default-storageclass=true in "newest-cni-852936"
	W1027 23:29:17.822507 1382384 addons.go:247] addon default-storageclass should already be in state true
	I1027 23:29:17.822532 1382384 host.go:66] Checking if "newest-cni-852936" exists ...
	I1027 23:29:17.822969 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:17.865449 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:17.877907 1382384 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:29:17.877928 1382384 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:29:17.877992 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:17.879738 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:17.900212 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:18.077474 1382384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:29:18.149293 1382384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:29:18.160724 1382384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:29:18.236930 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 23:29:18.237002 1382384 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 23:29:18.328299 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 23:29:18.328364 1382384 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 23:29:18.383950 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 23:29:18.384014 1382384 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 23:29:18.408588 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 23:29:18.408653 1382384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 23:29:18.442883 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 23:29:18.442954 1382384 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 23:29:18.464941 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 23:29:18.465009 1382384 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 23:29:18.491431 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 23:29:18.491509 1382384 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 23:29:18.511476 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 23:29:18.511545 1382384 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 23:29:18.536825 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 23:29:18.536903 1382384 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 23:29:18.559539 1382384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
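All ten dashboard manifests go through one kubectl apply; -f arguments are processed in the order they are given, which is why dashboard-ns.yaml leads the list: the dashboard Namespace has to exist before the namespaced objects behind it can be created. Passing a directory instead would apply files in lexical order and lose that guarantee, so the explicit list is the safer shape; a hedged directory form works only once the Namespace already exists:

    # Re-apply everything under the addons directory (idempotent once the Namespace exists)
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/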
	W1027 23:29:18.696525 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	W1027 23:29:20.699896 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	I1027 23:29:21.700112 1377654 pod_ready.go:94] pod "coredns-66bc5c9577-lzssb" is "Ready"
	I1027 23:29:21.700136 1377654 pod_ready.go:86] duration metric: took 36.009275195s for pod "coredns-66bc5c9577-lzssb" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.703421 1377654 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.715777 1377654 pod_ready.go:94] pod "etcd-default-k8s-diff-port-336451" is "Ready"
	I1027 23:29:21.715842 1377654 pod_ready.go:86] duration metric: took 12.348506ms for pod "etcd-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.719027 1377654 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.728322 1377654 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-336451" is "Ready"
	I1027 23:29:21.728398 1377654 pod_ready.go:86] duration metric: took 9.29462ms for pod "kube-apiserver-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.732228 1377654 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.895924 1377654 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-336451" is "Ready"
	I1027 23:29:21.896004 1377654 pod_ready.go:86] duration metric: took 163.695676ms for pod "kube-controller-manager-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:22.098328 1377654 pod_ready.go:83] waiting for pod "kube-proxy-n4vzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:22.494663 1377654 pod_ready.go:94] pod "kube-proxy-n4vzn" is "Ready"
	I1027 23:29:22.494740 1377654 pod_ready.go:86] duration metric: took 396.322861ms for pod "kube-proxy-n4vzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:22.694755 1377654 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:23.095902 1377654 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-336451" is "Ready"
	I1027 23:29:23.095941 1377654 pod_ready.go:86] duration metric: took 401.110104ms for pod "kube-scheduler-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:23.095954 1377654 pod_ready.go:40] duration metric: took 37.409990985s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:29:23.191426 1377654 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:29:23.194537 1377654 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-336451" cluster and "default" namespace by default
	I1027 23:29:23.865678 1382384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.78812787s)
	I1027 23:29:23.865733 1382384 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.716420658s)
	I1027 23:29:23.865764 1382384 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:29:23.865819 1382384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:29:23.865890 1382384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.705145908s)
	I1027 23:29:23.866282 1382384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.30666267s)
	I1027 23:29:23.869166 1382384 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-852936 addons enable metrics-server
	
	I1027 23:29:23.896432 1382384 api_server.go:72] duration metric: took 6.149164962s to wait for apiserver process to appear ...
	I1027 23:29:23.896452 1382384 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:29:23.896472 1382384 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 23:29:23.905254 1382384 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 23:29:23.905324 1382384 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
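The [+]/[-] breakdown above is the apiserver's verbose health report; minikube keeps polling until the rbac/bootstrap-roles post-start hook flips to ok, which happens one poll later at 23:29:24. The same view can be fetched by hand (anonymous access to /healthz is granted by the default system:public-info-viewer binding; -k because the serving cert is cluster-signed):

    # Per-check health breakdown straight from the apiserver
    curl -k "https://192.168.85.2:8443/healthz?verbose"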
	I1027 23:29:23.915351 1382384 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 23:29:23.918229 1382384 addons.go:514] duration metric: took 6.170528043s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 23:29:24.396619 1382384 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 23:29:24.404992 1382384 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 23:29:24.406155 1382384 api_server.go:141] control plane version: v1.34.1
	I1027 23:29:24.406180 1382384 api_server.go:131] duration metric: took 509.720774ms to wait for apiserver health ...
	I1027 23:29:24.406189 1382384 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:29:24.409864 1382384 system_pods.go:59] 8 kube-system pods found
	I1027 23:29:24.409906 1382384 system_pods.go:61] "coredns-66bc5c9577-jzn5z" [191e4eff-7490-4e8a-9231-7e634396b226] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 23:29:24.409916 1382384 system_pods.go:61] "etcd-newest-cni-852936" [4d42a25f-5e7b-4657-a6f1-d46bc06216dc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:29:24.409949 1382384 system_pods.go:61] "kindnet-q6tfx" [b3f08f81-257b-4bba-9acc-4b3c88d70bb7] Running
	I1027 23:29:24.409959 1382384 system_pods.go:61] "kube-apiserver-newest-cni-852936" [090b241c-c08c-4306-b40c-871e5421048b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:29:24.409967 1382384 system_pods.go:61] "kube-controller-manager-newest-cni-852936" [5016a35c-4906-416f-981d-3d8eafafac9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:29:24.409976 1382384 system_pods.go:61] "kube-proxy-qcz7m" [8263ca0a-34e2-4388-82ba-1714b8940cba] Running
	I1027 23:29:24.409988 1382384 system_pods.go:61] "kube-scheduler-newest-cni-852936" [4f47dc44-57da-47eb-b115-12f3d5bac007] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:29:24.409994 1382384 system_pods.go:61] "storage-provisioner" [ebb4e6b7-17b5-43ab-b54c-34a6b5b2caa2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 23:29:24.410017 1382384 system_pods.go:74] duration metric: took 3.807388ms to wait for pod list to return data ...
	I1027 23:29:24.410063 1382384 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:29:24.412702 1382384 default_sa.go:45] found service account: "default"
	I1027 23:29:24.412729 1382384 default_sa.go:55] duration metric: took 2.657145ms for default service account to be created ...
	I1027 23:29:24.412743 1382384 kubeadm.go:587] duration metric: took 6.665481562s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 23:29:24.412760 1382384 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:29:24.415832 1382384 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:29:24.415864 1382384 node_conditions.go:123] node cpu capacity is 2
	I1027 23:29:24.415877 1382384 node_conditions.go:105] duration metric: took 3.112233ms to run NodePressure ...
	I1027 23:29:24.415891 1382384 start.go:242] waiting for startup goroutines ...
	I1027 23:29:24.415931 1382384 start.go:247] waiting for cluster config update ...
	I1027 23:29:24.415944 1382384 start.go:256] writing updated cluster config ...
	I1027 23:29:24.416251 1382384 ssh_runner.go:195] Run: rm -f paused
	I1027 23:29:24.473504 1382384 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:29:24.476808 1382384 out.go:179] * Done! kubectl is now configured to use "newest-cni-852936" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.709444344Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.713242478Z" level=info msg="Running pod sandbox: kube-system/kindnet-q6tfx/POD" id=5989fccc-9a0d-4922-9636-6adca3cc973e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.713679259Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.716158416Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=006217a1-9a85-41da-9aa9-8973c2ad6903 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.733641303Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=5989fccc-9a0d-4922-9636-6adca3cc973e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.736952841Z" level=info msg="Ran pod sandbox ac98f4ee737bac6331d902e3203be8998a0746f2130b093d82070abff99222e3 with infra container: kube-system/kube-proxy-qcz7m/POD" id=006217a1-9a85-41da-9aa9-8973c2ad6903 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.740999617Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=3fb24a92-91f1-417d-9e8c-42e0e4ddd5f7 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.746715143Z" level=info msg="Ran pod sandbox c7accf6121c8bdb83946cf64da140bfaffe6caff774f83285db36d9c36c8e87a with infra container: kube-system/kindnet-q6tfx/POD" id=5989fccc-9a0d-4922-9636-6adca3cc973e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.749479137Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3a596dfe-a682-4f8e-a19f-565fb85e62ac name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.749870429Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=25ad7ef2-e5b0-4a14-864f-fc592b647119 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.753583383Z" level=info msg="Creating container: kube-system/kube-proxy-qcz7m/kube-proxy" id=f85b0974-706d-4779-86fb-a19657c0f7a8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.754169607Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.753875876Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d77a8a31-ade2-4311-9d1a-97b311d83c8d name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.766788204Z" level=info msg="Creating container: kube-system/kindnet-q6tfx/kindnet-cni" id=a1a4730f-ebcd-49cf-b904-6338ebb52ff1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.767100118Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.78011498Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.780859679Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.783440286Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.787016196Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.843969249Z" level=info msg="Created container d84aeb60c3d677348b168a700554376d45dc7c3accb07b90ed78a7aeb9c54b4d: kube-system/kindnet-q6tfx/kindnet-cni" id=a1a4730f-ebcd-49cf-b904-6338ebb52ff1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.844759889Z" level=info msg="Starting container: d84aeb60c3d677348b168a700554376d45dc7c3accb07b90ed78a7aeb9c54b4d" id=d5aa0a43-8a8e-4618-a94a-1d245286f01d name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.847825877Z" level=info msg="Started container" PID=1053 containerID=d84aeb60c3d677348b168a700554376d45dc7c3accb07b90ed78a7aeb9c54b4d description=kube-system/kindnet-q6tfx/kindnet-cni id=d5aa0a43-8a8e-4618-a94a-1d245286f01d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c7accf6121c8bdb83946cf64da140bfaffe6caff774f83285db36d9c36c8e87a
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.910297595Z" level=info msg="Created container e2a7c7914491369242dd969c692d5341b722cd50dd34558d711d71dbe029a0ae: kube-system/kube-proxy-qcz7m/kube-proxy" id=f85b0974-706d-4779-86fb-a19657c0f7a8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.917423504Z" level=info msg="Starting container: e2a7c7914491369242dd969c692d5341b722cd50dd34558d711d71dbe029a0ae" id=9cd087fb-5d6b-4871-a207-7e6194df99be name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:29:22 newest-cni-852936 crio[612]: time="2025-10-27T23:29:22.922899938Z" level=info msg="Started container" PID=1054 containerID=e2a7c7914491369242dd969c692d5341b722cd50dd34558d711d71dbe029a0ae description=kube-system/kube-proxy-qcz7m/kube-proxy id=9cd087fb-5d6b-4871-a207-7e6194df99be name=/runtime.v1.RuntimeService/StartContainer sandboxID=ac98f4ee737bac6331d902e3203be8998a0746f2130b093d82070abff99222e3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e2a7c79144913       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   ac98f4ee737ba       kube-proxy-qcz7m                            kube-system
	d84aeb60c3d67       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   c7accf6121c8b       kindnet-q6tfx                               kube-system
	330dc9b597bf2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   12 seconds ago      Running             kube-scheduler            1                   44a7e9ca9bd38       kube-scheduler-newest-cni-852936            kube-system
	7ba655a45a78e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   12 seconds ago      Running             kube-controller-manager   1                   74e8b8cb55b76       kube-controller-manager-newest-cni-852936   kube-system
	c24fe513253c6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   12 seconds ago      Running             kube-apiserver            1                   5c5ca8e1a7ef4       kube-apiserver-newest-cni-852936            kube-system
	88f79d403d4f7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   12 seconds ago      Running             etcd                      1                   973cd30ccde51       etcd-newest-cni-852936                      kube-system
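The table above is the CRI view of the node after the restart: every control-plane container is on ATTEMPT 1, i.e. its second run in the same pod sandbox. A comparable listing comes straight from crictl on the node:

    # List all containers, running or exited
    sudo crictl ps -a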
	
	
	==> describe nodes <==
	Name:               newest-cni-852936
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-852936
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=newest-cni-852936
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T23_28_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 23:28:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-852936
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 23:29:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 23:29:22 +0000   Mon, 27 Oct 2025 23:28:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 23:29:22 +0000   Mon, 27 Oct 2025 23:28:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 23:29:22 +0000   Mon, 27 Oct 2025 23:28:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 27 Oct 2025 23:29:22 +0000   Mon, 27 Oct 2025 23:28:47 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-852936
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                92cc5ecf-38b1-42c9-8ddf-bd258bac7f0d
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-852936                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-q6tfx                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-852936             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-852936    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-qcz7m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-852936             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  44s (x9 over 44s)  kubelet          Node newest-cni-852936 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node newest-cni-852936 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     44s (x7 over 44s)  kubelet          Node newest-cni-852936 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-852936 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-852936 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-852936 status is now: NodeHasSufficientPID
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           29s                node-controller  Node newest-cni-852936 event: Registered Node newest-cni-852936 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-852936 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-852936 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-852936 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-852936 event: Registered Node newest-cni-852936 in Controller
	
	
	==> dmesg <==
	[Oct27 23:06] overlayfs: idmapped layers are currently not supported
	[  +3.129054] overlayfs: idmapped layers are currently not supported
	[Oct27 23:08] overlayfs: idmapped layers are currently not supported
	[Oct27 23:09] overlayfs: idmapped layers are currently not supported
	[  +0.696324] overlayfs: idmapped layers are currently not supported
	[ +42.065460] overlayfs: idmapped layers are currently not supported
	[Oct27 23:10] overlayfs: idmapped layers are currently not supported
	[ +23.722860] overlayfs: idmapped layers are currently not supported
	[Oct27 23:16] overlayfs: idmapped layers are currently not supported
	[Oct27 23:17] overlayfs: idmapped layers are currently not supported
	[Oct27 23:18] overlayfs: idmapped layers are currently not supported
	[Oct27 23:19] overlayfs: idmapped layers are currently not supported
	[Oct27 23:20] overlayfs: idmapped layers are currently not supported
	[Oct27 23:21] overlayfs: idmapped layers are currently not supported
	[Oct27 23:22] overlayfs: idmapped layers are currently not supported
	[ +34.590925] overlayfs: idmapped layers are currently not supported
	[Oct27 23:23] overlayfs: idmapped layers are currently not supported
	[  +6.906011] overlayfs: idmapped layers are currently not supported
	[Oct27 23:25] overlayfs: idmapped layers are currently not supported
	[  +2.284017] overlayfs: idmapped layers are currently not supported
	[Oct27 23:27] overlayfs: idmapped layers are currently not supported
	[  +6.661421] overlayfs: idmapped layers are currently not supported
	[Oct27 23:28] overlayfs: idmapped layers are currently not supported
	[ +11.644898] overlayfs: idmapped layers are currently not supported
	[Oct27 23:29] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [88f79d403d4f728d053f809d89ffcfddf313be934b17854c1851af271cdcc8f3] <==
	{"level":"warn","ts":"2025-10-27T23:29:19.759958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:19.783497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:19.811796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:19.838334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:19.860180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:19.876609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:19.899563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:19.926589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:19.948573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:19.975840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.034820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.068943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.087052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.121796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.160334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.191064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.236305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.270089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.323041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.362564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.391700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.441389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.502283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.519667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:29:20.618707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45094","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:29:30 up  6:11,  0 user,  load average: 5.45, 4.51, 3.63
	Linux newest-cni-852936 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d84aeb60c3d677348b168a700554376d45dc7c3accb07b90ed78a7aeb9c54b4d] <==
	I1027 23:29:22.942619       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:29:22.948063       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 23:29:22.948207       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:29:22.948220       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:29:22.948231       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:29:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:29:23.169468       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:29:23.169494       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:29:23.169510       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:29:23.181064       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [c24fe513253c6dc838d98980bfab0d60b8ee3c4899660c10d658adf5d75315be] <==
	I1027 23:29:22.236432       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 23:29:22.237063       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 23:29:22.237135       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 23:29:22.237165       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 23:29:22.237171       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 23:29:22.237248       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 23:29:22.237285       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 23:29:22.251787       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 23:29:22.251878       1 policy_source.go:240] refreshing policies
	I1027 23:29:22.287889       1 cache.go:39] Caches are synced for autoregister controller
	I1027 23:29:22.324652       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 23:29:22.324998       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 23:29:22.428330       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 23:29:22.433479       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 23:29:22.459481       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 23:29:22.757150       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 23:29:23.047481       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 23:29:23.212429       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 23:29:23.380806       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 23:29:23.445221       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 23:29:23.738356       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.83.219"}
	I1027 23:29:23.799625       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.153.86"}
	I1027 23:29:25.951584       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 23:29:26.294830       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 23:29:26.396085       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [7ba655a45a78e5e901dbbfebe2a50cccb83aae7f62e5ff23596fb3ec81ccb126] <==
	I1027 23:29:25.979368       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 23:29:25.979592       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 23:29:25.979685       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 23:29:25.979718       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 23:29:25.979730       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 23:29:25.979736       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 23:29:25.982584       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 23:29:25.984027       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 23:29:25.986626       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 23:29:25.987506       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 23:29:25.987516       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 23:29:25.987538       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 23:29:25.987597       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 23:29:25.989980       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 23:29:25.995754       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 23:29:25.995956       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:29:25.996048       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 23:29:25.996174       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 23:29:25.996614       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 23:29:25.996089       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 23:29:25.998320       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-852936"
	I1027 23:29:25.998502       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 23:29:26.003651       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 23:29:26.008011       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 23:29:26.010982       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	
	
	==> kube-proxy [e2a7c7914491369242dd969c692d5341b722cd50dd34558d711d71dbe029a0ae] <==
	I1027 23:29:23.415421       1 server_linux.go:53] "Using iptables proxy"
	I1027 23:29:23.742710       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 23:29:23.844997       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 23:29:23.854514       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 23:29:23.854639       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 23:29:23.901468       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:29:23.901526       1 server_linux.go:132] "Using iptables Proxier"
	I1027 23:29:23.913407       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 23:29:23.913727       1 server.go:527] "Version info" version="v1.34.1"
	I1027 23:29:23.913739       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:29:23.916579       1 config.go:200] "Starting service config controller"
	I1027 23:29:23.916602       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 23:29:23.916623       1 config.go:106] "Starting endpoint slice config controller"
	I1027 23:29:23.916631       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 23:29:23.916643       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 23:29:23.916647       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 23:29:23.920620       1 config.go:309] "Starting node config controller"
	I1027 23:29:23.920641       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 23:29:23.920650       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 23:29:24.017577       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 23:29:24.017691       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 23:29:24.017771       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [330dc9b597bf25efc2a585d5e204a8122f12b9d06572abb8eca0714117e09773] <==
	I1027 23:29:19.997332       1 serving.go:386] Generated self-signed cert in-memory
	I1027 23:29:22.567995       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 23:29:22.568024       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:29:22.584244       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 23:29:22.584284       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 23:29:22.584353       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:29:22.584362       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:29:22.584376       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:29:22.584383       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:29:22.585493       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 23:29:22.585738       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 23:29:22.690837       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:29:22.690905       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 23:29:22.690997       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 23:29:20 newest-cni-852936 kubelet[729]: E1027 23:29:20.117232     729 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-852936\" not found" node="newest-cni-852936"
	Oct 27 23:29:21 newest-cni-852936 kubelet[729]: E1027 23:29:21.110803     729 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-852936\" not found" node="newest-cni-852936"
	Oct 27 23:29:21 newest-cni-852936 kubelet[729]: I1027 23:29:21.898871     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-852936"
	Oct 27 23:29:21 newest-cni-852936 kubelet[729]: I1027 23:29:21.960658     729 apiserver.go:52] "Watching apiserver"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.182191     729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.276415     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b3f08f81-257b-4bba-9acc-4b3c88d70bb7-cni-cfg\") pod \"kindnet-q6tfx\" (UID: \"b3f08f81-257b-4bba-9acc-4b3c88d70bb7\") " pod="kube-system/kindnet-q6tfx"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.276481     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3f08f81-257b-4bba-9acc-4b3c88d70bb7-lib-modules\") pod \"kindnet-q6tfx\" (UID: \"b3f08f81-257b-4bba-9acc-4b3c88d70bb7\") " pod="kube-system/kindnet-q6tfx"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.276514     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8263ca0a-34e2-4388-82ba-1714b8940cba-lib-modules\") pod \"kube-proxy-qcz7m\" (UID: \"8263ca0a-34e2-4388-82ba-1714b8940cba\") " pod="kube-system/kube-proxy-qcz7m"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.276561     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3f08f81-257b-4bba-9acc-4b3c88d70bb7-xtables-lock\") pod \"kindnet-q6tfx\" (UID: \"b3f08f81-257b-4bba-9acc-4b3c88d70bb7\") " pod="kube-system/kindnet-q6tfx"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.276580     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8263ca0a-34e2-4388-82ba-1714b8940cba-xtables-lock\") pod \"kube-proxy-qcz7m\" (UID: \"8263ca0a-34e2-4388-82ba-1714b8940cba\") " pod="kube-system/kube-proxy-qcz7m"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.490964     729 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.504486     729 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-852936"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.504594     729 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-852936"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.504636     729 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.507031     729 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: E1027 23:29:22.531006     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-852936\" already exists" pod="kube-system/etcd-newest-cni-852936"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.531042     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-852936"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: E1027 23:29:22.573057     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-852936\" already exists" pod="kube-system/kube-apiserver-newest-cni-852936"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.573094     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-852936"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: E1027 23:29:22.686670     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-852936\" already exists" pod="kube-system/kube-controller-manager-newest-cni-852936"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: I1027 23:29:22.686761     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-852936"
	Oct 27 23:29:22 newest-cni-852936 kubelet[729]: E1027 23:29:22.800427     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-852936\" already exists" pod="kube-system/kube-scheduler-newest-cni-852936"
	Oct 27 23:29:25 newest-cni-852936 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 23:29:25 newest-cni-852936 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 23:29:25 newest-cni-852936 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
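The describe-nodes output above ties the node's Ready=False condition directly to a missing CNI config: kubelet keeps NetworkReady=false until a configuration file appears in /etc/cni/net.d/ (normally written by the kindnet pod once it is up). A minimal sketch of the same check, assuming a hypothetical standalone helper that is not part of minikube or its test suite:

	// cnicheck.go: hypothetical standalone sketch, not part of minikube.
	// Reproduces the condition kubelet reports above: the node stays
	// NotReady until at least one CNI config exists in /etc/cni/net.d/.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		entries, err := os.ReadDir("/etc/cni/net.d")
		if err != nil || len(entries) == 0 {
			fmt.Println("no CNI configuration: expect NetworkReady=false, node NotReady")
			return
		}
		for _, e := range entries {
			fmt.Println("found CNI config:", e.Name())
		}
	}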
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-852936 -n newest-cni-852936
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-852936 -n newest-cni-852936: exit status 2 (350.602787ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-852936 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-jzn5z storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bc2g8 kubernetes-dashboard-855c9754f9-rkp9z
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-852936 describe pod coredns-66bc5c9577-jzn5z storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bc2g8 kubernetes-dashboard-855c9754f9-rkp9z
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-852936 describe pod coredns-66bc5c9577-jzn5z storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bc2g8 kubernetes-dashboard-855c9754f9-rkp9z: exit status 1 (90.575562ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-jzn5z" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-bc2g8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-rkp9z" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-852936 describe pod coredns-66bc5c9577-jzn5z storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bc2g8 kubernetes-dashboard-855c9754f9-rkp9z: exit status 1
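The NotFound errors are consistent with how the post-mortem runs: the pod names come from a cluster-wide jsonpath query, but the describe step is issued without a namespace, so pods living in kube-system or kubernetes-dashboard are looked up in the default namespace (and the names can also go stale if pods are recreated between the two commands). A rough sketch of that two-step flow, assuming a hypothetical helper that shells out the same way the harness does:

	// postmortem.go: hypothetical sketch of the two kubectl steps above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Step 1: names of all pods not in phase Running, across namespaces.
		out, err := exec.Command("kubectl", "--context", "newest-cni-852936",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			fmt.Println("list failed:", err)
			return
		}
		// Step 2: describe each name. Without -n this resolves against the
		// default namespace, one way to end up with the NotFound errors above.
		for _, name := range strings.Fields(string(out)) {
			desc, derr := exec.Command("kubectl", "--context", "newest-cni-852936",
				"describe", "pod", name).CombinedOutput()
			fmt.Printf("describe %s (err=%v):\n%s\n", name, derr, desc)
		}
	}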
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.79s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (6.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-336451 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-336451 --alsologtostderr -v=1: exit status 80 (2.051000749s)

-- stdout --
	* Pausing node default-k8s-diff-port-336451 ... 
	
	

-- /stdout --
** stderr ** 
	I1027 23:29:35.132673 1385636 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:29:35.132827 1385636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:29:35.132841 1385636 out.go:374] Setting ErrFile to fd 2...
	I1027 23:29:35.132847 1385636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:29:35.133137 1385636 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:29:35.133482 1385636 out.go:368] Setting JSON to false
	I1027 23:29:35.133514 1385636 mustload.go:66] Loading cluster: default-k8s-diff-port-336451
	I1027 23:29:35.133964 1385636 config.go:182] Loaded profile config "default-k8s-diff-port-336451": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:29:35.134596 1385636 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-336451 --format={{.State.Status}}
	I1027 23:29:35.152467 1385636 host.go:66] Checking if "default-k8s-diff-port-336451" exists ...
	I1027 23:29:35.152822 1385636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:29:35.214289 1385636 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 23:29:35.204190724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:29:35.215029 1385636 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21797/minikube-v1.37.0-1761414747-21797-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761414747-21797/minikube-v1.37.0-1761414747-21797-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761414747-21797-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-336451 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 23:29:35.220557 1385636 out.go:179] * Pausing node default-k8s-diff-port-336451 ... 
	I1027 23:29:35.223618 1385636 host.go:66] Checking if "default-k8s-diff-port-336451" exists ...
	I1027 23:29:35.224025 1385636 ssh_runner.go:195] Run: systemctl --version
	I1027 23:29:35.224086 1385636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-336451
	I1027 23:29:35.242055 1385636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34599 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/default-k8s-diff-port-336451/id_rsa Username:docker}
	I1027 23:29:35.349270 1385636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:29:35.364337 1385636 pause.go:52] kubelet running: true
	I1027 23:29:35.364449 1385636 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 23:29:35.621871 1385636 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 23:29:35.621994 1385636 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 23:29:35.699839 1385636 cri.go:89] found id: "c63a21c878d688b09782a9d01e91abf9249e4e4f9b61c603169d9ee05fb2d2ee"
	I1027 23:29:35.699862 1385636 cri.go:89] found id: "fd096bbd312ce4ab42d6ec3b91f6f324ae5551679e881b224b3a5f4205916eee"
	I1027 23:29:35.699869 1385636 cri.go:89] found id: "e286d3355f877874f1258955d812cbe73eef79f899dbe2144abe0c20b709727a"
	I1027 23:29:35.699873 1385636 cri.go:89] found id: "d77a4209b5d8b6166e65f50776e9be005d032b980c041b2b25fb2f68396863f1"
	I1027 23:29:35.699876 1385636 cri.go:89] found id: "31fd7339c9b6866e0f75aa299a3f5f421e9b2e21a2e13ea31cc69466a502ee2c"
	I1027 23:29:35.699880 1385636 cri.go:89] found id: "7f66ec5899883992c1749593bfd4630c3ce8244c7e186676fa13e99cb58e4a03"
	I1027 23:29:35.699883 1385636 cri.go:89] found id: "e042d7ccfe395ac64bbfa1b1099e7ff453e4d67df7754503aac635f0f8ba71a8"
	I1027 23:29:35.699887 1385636 cri.go:89] found id: "69c1f90555bd0a08896702d72889b7cbea6dc8f6bf3d24bcc9936a63461f070f"
	I1027 23:29:35.699890 1385636 cri.go:89] found id: "ee6b21c638763f9bea06ed3eb613912563fe107d49320d174cfb911c51258b74"
	I1027 23:29:35.699896 1385636 cri.go:89] found id: "eaf10ad419dd1638041c2c094f64e06cb64c2fac32344129da5e4dbe35087490"
	I1027 23:29:35.699900 1385636 cri.go:89] found id: "d9cc060395e7c461eef94cb5f9bb56799fcbc841f9f373397f342e2d95f6b958"
	I1027 23:29:35.699904 1385636 cri.go:89] found id: ""
	I1027 23:29:35.699960 1385636 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 23:29:35.711782 1385636 retry.go:31] will retry after 335.770049ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:29:35Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:29:36.048435 1385636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:29:36.061843 1385636 pause.go:52] kubelet running: false
	I1027 23:29:36.061942 1385636 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 23:29:36.246242 1385636 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 23:29:36.246336 1385636 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 23:29:36.319764 1385636 cri.go:89] found id: "c63a21c878d688b09782a9d01e91abf9249e4e4f9b61c603169d9ee05fb2d2ee"
	I1027 23:29:36.319791 1385636 cri.go:89] found id: "fd096bbd312ce4ab42d6ec3b91f6f324ae5551679e881b224b3a5f4205916eee"
	I1027 23:29:36.319797 1385636 cri.go:89] found id: "e286d3355f877874f1258955d812cbe73eef79f899dbe2144abe0c20b709727a"
	I1027 23:29:36.319801 1385636 cri.go:89] found id: "d77a4209b5d8b6166e65f50776e9be005d032b980c041b2b25fb2f68396863f1"
	I1027 23:29:36.319805 1385636 cri.go:89] found id: "31fd7339c9b6866e0f75aa299a3f5f421e9b2e21a2e13ea31cc69466a502ee2c"
	I1027 23:29:36.319809 1385636 cri.go:89] found id: "7f66ec5899883992c1749593bfd4630c3ce8244c7e186676fa13e99cb58e4a03"
	I1027 23:29:36.319812 1385636 cri.go:89] found id: "e042d7ccfe395ac64bbfa1b1099e7ff453e4d67df7754503aac635f0f8ba71a8"
	I1027 23:29:36.319815 1385636 cri.go:89] found id: "69c1f90555bd0a08896702d72889b7cbea6dc8f6bf3d24bcc9936a63461f070f"
	I1027 23:29:36.319818 1385636 cri.go:89] found id: "ee6b21c638763f9bea06ed3eb613912563fe107d49320d174cfb911c51258b74"
	I1027 23:29:36.319824 1385636 cri.go:89] found id: "eaf10ad419dd1638041c2c094f64e06cb64c2fac32344129da5e4dbe35087490"
	I1027 23:29:36.319827 1385636 cri.go:89] found id: "d9cc060395e7c461eef94cb5f9bb56799fcbc841f9f373397f342e2d95f6b958"
	I1027 23:29:36.319830 1385636 cri.go:89] found id: ""
	I1027 23:29:36.319888 1385636 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 23:29:36.331098 1385636 retry.go:31] will retry after 480.158708ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:29:36Z" level=error msg="open /run/runc: no such file or directory"
	I1027 23:29:36.811780 1385636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 23:29:36.825461 1385636 pause.go:52] kubelet running: false
	I1027 23:29:36.825564 1385636 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 23:29:37.026748 1385636 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 23:29:37.026872 1385636 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 23:29:37.093426 1385636 cri.go:89] found id: "c63a21c878d688b09782a9d01e91abf9249e4e4f9b61c603169d9ee05fb2d2ee"
	I1027 23:29:37.093451 1385636 cri.go:89] found id: "fd096bbd312ce4ab42d6ec3b91f6f324ae5551679e881b224b3a5f4205916eee"
	I1027 23:29:37.093459 1385636 cri.go:89] found id: "e286d3355f877874f1258955d812cbe73eef79f899dbe2144abe0c20b709727a"
	I1027 23:29:37.093463 1385636 cri.go:89] found id: "d77a4209b5d8b6166e65f50776e9be005d032b980c041b2b25fb2f68396863f1"
	I1027 23:29:37.093466 1385636 cri.go:89] found id: "31fd7339c9b6866e0f75aa299a3f5f421e9b2e21a2e13ea31cc69466a502ee2c"
	I1027 23:29:37.093471 1385636 cri.go:89] found id: "7f66ec5899883992c1749593bfd4630c3ce8244c7e186676fa13e99cb58e4a03"
	I1027 23:29:37.093474 1385636 cri.go:89] found id: "e042d7ccfe395ac64bbfa1b1099e7ff453e4d67df7754503aac635f0f8ba71a8"
	I1027 23:29:37.093477 1385636 cri.go:89] found id: "69c1f90555bd0a08896702d72889b7cbea6dc8f6bf3d24bcc9936a63461f070f"
	I1027 23:29:37.093482 1385636 cri.go:89] found id: "ee6b21c638763f9bea06ed3eb613912563fe107d49320d174cfb911c51258b74"
	I1027 23:29:37.093492 1385636 cri.go:89] found id: "eaf10ad419dd1638041c2c094f64e06cb64c2fac32344129da5e4dbe35087490"
	I1027 23:29:37.093496 1385636 cri.go:89] found id: "d9cc060395e7c461eef94cb5f9bb56799fcbc841f9f373397f342e2d95f6b958"
	I1027 23:29:37.093500 1385636 cri.go:89] found id: ""
	I1027 23:29:37.093549 1385636 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 23:29:37.109668 1385636 out.go:203] 
	W1027 23:29:37.112865 1385636 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:29:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T23:29:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 23:29:37.112891 1385636 out.go:285] * 
	* 
	W1027 23:29:37.122624 1385636 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 23:29:37.126371 1385636 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-336451 --alsologtostderr -v=1 failed: exit status 80
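The trace shows where pause gives up: kubelet is stopped cleanly, crictl still lists the kube-system containers, but every `sudo runc list -f json` attempt fails with "open /run/runc: no such file or directory" (evidently nothing has created runc's default state directory on this crio node), and once the retry budget is spent the command exits with GUEST_PAUSE. A minimal sketch of the retry-with-backoff shape visible at retry.go:31, with illustrative durations and a hypothetical helper name:

	// retrysketch.go: hypothetical sketch of the retry pattern in the trace.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runWithRetry retries cmd a few times with growing pauses, returning
	// the last error once the attempts are exhausted.
	func runWithRetry(attempts int, cmd func() error) error {
		var err error
		backoff := 300 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if err = cmd(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", backoff, err)
			time.Sleep(backoff)
			backoff += 150 * time.Millisecond
		}
		return err
	}

	func main() {
		err := runWithRetry(3, func() error {
			// Fails here exactly as in the log: /run/runc is absent under crio.
			return exec.Command("sudo", "runc", "list", "-f", "json").Run()
		})
		if err != nil {
			fmt.Println("giving up:", err)
		}
	}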
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
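The docker inspect dump below is the raw form of what the pause command queried earlier through cli_runner: the NetworkSettings.Ports map, from which the host port bound to the container's 22/tcp (34599 here) is read with a Go template. A small sketch of the same lookup, assuming a hypothetical standalone program rather than minikube's own code:

	// portlookup.go: hypothetical sketch of the host-port lookup in the trace.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template as the cli_runner call above: index the Ports map
		// at "22/tcp" and take the first binding's HostPort.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", tmpl, "default-k8s-diff-port-336451").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}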
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-336451
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-336451:

-- stdout --
	[
	    {
	        "Id": "8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59",
	        "Created": "2025-10-27T23:26:41.328254644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1378056,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T23:28:27.179421892Z",
	            "FinishedAt": "2025-10-27T23:28:25.838201393Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59/hostname",
	        "HostsPath": "/var/lib/docker/containers/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59/hosts",
	        "LogPath": "/var/lib/docker/containers/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59-json.log",
	        "Name": "/default-k8s-diff-port-336451",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-336451:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-336451",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59",
	                "LowerDir": "/var/lib/docker/overlay2/db307246a30588d0ae121c4ec53a2353a232f31a81ee681f92ae6a0a6bc49dc6-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/db307246a30588d0ae121c4ec53a2353a232f31a81ee681f92ae6a0a6bc49dc6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/db307246a30588d0ae121c4ec53a2353a232f31a81ee681f92ae6a0a6bc49dc6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/db307246a30588d0ae121c4ec53a2353a232f31a81ee681f92ae6a0a6bc49dc6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-336451",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-336451/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-336451",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-336451",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-336451",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "136be4ace32a72fc57cdb4e3941d14f7ae54c64988c6ef37260cf5b8a57ca5e4",
	            "SandboxKey": "/var/run/docker/netns/136be4ace32a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34599"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34600"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34603"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34601"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34602"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-336451": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:4a:b9:62:d9:d4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "55da9c2196e319a24b4d34567d8cd7569236804748720d465d6d478b5766bd82",
	                    "EndpointID": "51d30e0e130ab355e8b31854de6e3628e4f7f114807ddc6cd221c98eb8b28a8c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-336451",
	                        "8835f98b0ace"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
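The Ports map in the inspect output above shows each exposed container port published on a 127.0.0.1 host port (22/tcp on 34599, 8444/tcp on 34602, and so on). A single mapping can be pulled straight out of that JSON with a Go-template query, the same format string minikube's cli_runner uses later in this log; a sketch against the profile under test here:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-336451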
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-336451 -n default-k8s-diff-port-336451
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-336451 -n default-k8s-diff-port-336451: exit status 2 (353.287647ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-336451 logs -n 25
E1027 23:29:38.412011 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/calico-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-336451 logs -n 25: (1.248431008s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-247293                                                                                                                                                                                                               │ disable-driver-mounts-247293 │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ start   │ -p default-k8s-diff-port-336451 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:28 UTC │
	│ addons  │ enable metrics-server -p embed-certs-790322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	│ stop    │ -p embed-certs-790322 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-790322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ start   │ -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:27 UTC │
	│ image   │ embed-certs-790322 image list --format=json                                                                                                                                                                                                   │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ pause   │ -p embed-certs-790322 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-336451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-336451 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ delete  │ -p embed-certs-790322                                                                                                                                                                                                                         │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ delete  │ -p embed-certs-790322                                                                                                                                                                                                                         │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ start   │ -p newest-cni-852936 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:29 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-336451 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ start   │ -p default-k8s-diff-port-336451 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:29 UTC │
	│ addons  │ enable metrics-server -p newest-cni-852936 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │                     │
	│ stop    │ -p newest-cni-852936 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ addons  │ enable dashboard -p newest-cni-852936 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ start   │ -p newest-cni-852936 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ image   │ newest-cni-852936 image list --format=json                                                                                                                                                                                                    │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ pause   │ -p newest-cni-852936 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │                     │
	│ delete  │ -p newest-cni-852936                                                                                                                                                                                                                          │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ delete  │ -p newest-cni-852936                                                                                                                                                                                                                          │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ image   │ default-k8s-diff-port-336451 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ pause   │ -p default-k8s-diff-port-336451 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
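	The last audit row is the command this post-mortem covers: the pause has a start time but no end time, matching the Pause failure above. Re-running it by hand uses the same invocation (assuming the tree's arm64 build, as everywhere else in this report):
	
	out/minikube-linux-arm64 pause -p default-k8s-diff-port-336451 --alsologtostderr -v=1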
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 23:29:09
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 23:29:09.766117 1382384 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:29:09.766264 1382384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:29:09.766276 1382384 out.go:374] Setting ErrFile to fd 2...
	I1027 23:29:09.766281 1382384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:29:09.766839 1382384 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:29:09.767786 1382384 out.go:368] Setting JSON to false
	I1027 23:29:09.769056 1382384 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22299,"bootTime":1761585451,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 23:29:09.769139 1382384 start.go:143] virtualization:  
	I1027 23:29:09.772858 1382384 out.go:179] * [newest-cni-852936] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 23:29:09.776686 1382384 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:29:09.776813 1382384 notify.go:221] Checking for updates...
	I1027 23:29:09.782290 1382384 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:29:09.785212 1382384 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:29:09.788210 1382384 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 23:29:09.791116 1382384 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 23:29:09.793964 1382384 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 23:29:09.797372 1382384 config.go:182] Loaded profile config "newest-cni-852936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:29:09.797914 1382384 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:29:09.833947 1382384 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 23:29:09.834073 1382384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:29:09.893931 1382384 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 23:29:09.878864517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:29:09.894062 1382384 docker.go:318] overlay module found
	I1027 23:29:09.897336 1382384 out.go:179] * Using the docker driver based on existing profile
	I1027 23:29:09.900341 1382384 start.go:307] selected driver: docker
	I1027 23:29:09.900381 1382384 start.go:928] validating driver "docker" against &{Name:newest-cni-852936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:29:09.900493 1382384 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:29:09.901343 1382384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:29:09.956323 1382384 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 23:29:09.947321156 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:29:09.956662 1382384 start_flags.go:1010] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 23:29:09.956696 1382384 cni.go:84] Creating CNI manager for ""
	I1027 23:29:09.956755 1382384 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:29:09.956801 1382384 start.go:351] cluster config:
	{Name:newest-cni-852936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:29:09.959906 1382384 out.go:179] * Starting "newest-cni-852936" primary control-plane node in "newest-cni-852936" cluster
	I1027 23:29:09.962722 1382384 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 23:29:09.965885 1382384 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 23:29:09.968839 1382384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:29:09.968947 1382384 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 23:29:09.968971 1382384 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 23:29:09.968983 1382384 cache.go:59] Caching tarball of preloaded images
	I1027 23:29:09.969090 1382384 preload.go:233] Found /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 23:29:09.969100 1382384 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 23:29:09.969208 1382384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/config.json ...
	I1027 23:29:09.999805 1382384 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 23:29:09.999844 1382384 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 23:29:09.999859 1382384 cache.go:233] Successfully downloaded all kic artifacts
	I1027 23:29:09.999881 1382384 start.go:360] acquireMachinesLock for newest-cni-852936: {Name:mk3f294285068916d485e6bfcdad9561ce18d17d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:29:09.999976 1382384 start.go:364] duration metric: took 68.694µs to acquireMachinesLock for "newest-cni-852936"
	I1027 23:29:10.000031 1382384 start.go:96] Skipping create...Using existing machine configuration
	I1027 23:29:10.000085 1382384 fix.go:55] fixHost starting: 
	I1027 23:29:10.000495 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:10.025325 1382384 fix.go:113] recreateIfNeeded on newest-cni-852936: state=Stopped err=<nil>
	W1027 23:29:10.025370 1382384 fix.go:139] unexpected machine state, will restart: <nil>
	W1027 23:29:08.700022 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	W1027 23:29:11.196407 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	I1027 23:29:10.028623 1382384 out.go:252] * Restarting existing docker container for "newest-cni-852936" ...
	I1027 23:29:10.028792 1382384 cli_runner.go:164] Run: docker start newest-cni-852936
	I1027 23:29:10.308194 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:10.330658 1382384 kic.go:430] container "newest-cni-852936" state is running.
	I1027 23:29:10.331059 1382384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852936
	I1027 23:29:10.353242 1382384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/config.json ...
	I1027 23:29:10.353470 1382384 machine.go:94] provisionDockerMachine start ...
	I1027 23:29:10.353542 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:10.372326 1382384 main.go:143] libmachine: Using SSH client type: native
	I1027 23:29:10.372679 1382384 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34604 <nil> <nil>}
	I1027 23:29:10.372697 1382384 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:29:10.373227 1382384 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54926->127.0.0.1:34604: read: connection reset by peer
	I1027 23:29:13.522271 1382384 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-852936
	
	I1027 23:29:13.522368 1382384 ubuntu.go:182] provisioning hostname "newest-cni-852936"
	I1027 23:29:13.522473 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:13.543423 1382384 main.go:143] libmachine: Using SSH client type: native
	I1027 23:29:13.543747 1382384 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34604 <nil> <nil>}
	I1027 23:29:13.543767 1382384 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-852936 && echo "newest-cni-852936" | sudo tee /etc/hostname
	I1027 23:29:13.705024 1382384 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-852936
	
	I1027 23:29:13.705100 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:13.724776 1382384 main.go:143] libmachine: Using SSH client type: native
	I1027 23:29:13.725087 1382384 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34604 <nil> <nil>}
	I1027 23:29:13.725105 1382384 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-852936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-852936/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-852936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:29:13.874768 1382384 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 23:29:13.874793 1382384 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:29:13.874815 1382384 ubuntu.go:190] setting up certificates
	I1027 23:29:13.874826 1382384 provision.go:84] configureAuth start
	I1027 23:29:13.874883 1382384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852936
	I1027 23:29:13.897512 1382384 provision.go:143] copyHostCerts
	I1027 23:29:13.897574 1382384 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:29:13.897589 1382384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:29:13.897665 1382384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:29:13.897760 1382384 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:29:13.897765 1382384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:29:13.897791 1382384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:29:13.897849 1382384 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:29:13.897854 1382384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:29:13.897875 1382384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:29:13.897919 1382384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.newest-cni-852936 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-852936]
	I1027 23:29:14.197889 1382384 provision.go:177] copyRemoteCerts
	I1027 23:29:14.198003 1382384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:29:14.198069 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.216790 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:14.322005 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:29:14.339619 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 23:29:14.357698 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 23:29:14.374994 1382384 provision.go:87] duration metric: took 500.144707ms to configureAuth
	I1027 23:29:14.375019 1382384 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:29:14.375217 1382384 config.go:182] Loaded profile config "newest-cni-852936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:29:14.375326 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.392639 1382384 main.go:143] libmachine: Using SSH client type: native
	I1027 23:29:14.392951 1382384 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34604 <nil> <nil>}
	I1027 23:29:14.392965 1382384 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:29:14.687600 1382384 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:29:14.687621 1382384 machine.go:97] duration metric: took 4.334134462s to provisionDockerMachine
	I1027 23:29:14.687665 1382384 start.go:293] postStartSetup for "newest-cni-852936" (driver="docker")
	I1027 23:29:14.687685 1382384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:29:14.687758 1382384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:29:14.687803 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.707820 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:14.810235 1382384 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:29:14.813577 1382384 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:29:14.813651 1382384 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:29:14.813665 1382384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:29:14.813736 1382384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:29:14.813819 1382384 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:29:14.813926 1382384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:29:14.821590 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:29:14.839199 1382384 start.go:296] duration metric: took 151.517291ms for postStartSetup
	I1027 23:29:14.839285 1382384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:29:14.839332 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.857380 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:14.963797 1382384 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:29:14.968647 1382384 fix.go:57] duration metric: took 4.968601832s for fixHost
	I1027 23:29:14.968672 1382384 start.go:83] releasing machines lock for "newest-cni-852936", held for 4.96867508s
	I1027 23:29:14.968743 1382384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852936
	I1027 23:29:14.985572 1382384 ssh_runner.go:195] Run: cat /version.json
	I1027 23:29:14.985633 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.985873 1382384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:29:14.985939 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:15.005851 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:15.021224 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:15.134518 1382384 ssh_runner.go:195] Run: systemctl --version
	I1027 23:29:15.236918 1382384 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:29:15.280309 1382384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:29:15.285018 1382384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:29:15.285087 1382384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:29:15.293768 1382384 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 23:29:15.293791 1382384 start.go:496] detecting cgroup driver to use...
	I1027 23:29:15.293821 1382384 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 23:29:15.293867 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:29:15.309499 1382384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:29:15.323058 1382384 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:29:15.323175 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:29:15.339572 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:29:15.354227 1382384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:29:15.468373 1382384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:29:15.591069 1382384 docker.go:234] disabling docker service ...
	I1027 23:29:15.591189 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:29:15.606878 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:29:15.620798 1382384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:29:15.748929 1382384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:29:15.872886 1382384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:29:15.890660 1382384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:29:15.906654 1382384 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:29:15.906761 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.916506 1382384 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:29:15.916600 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.926592 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.936286 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.945124 1382384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:29:15.953537 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.962746 1382384 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.971004 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.979956 1382384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:29:15.987602 1382384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:29:16.001973 1382384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:29:16.135477 1382384 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 23:29:16.286541 1382384 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:29:16.286667 1382384 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:29:16.291239 1382384 start.go:564] Will wait 60s for crictl version
	I1027 23:29:16.291360 1382384 ssh_runner.go:195] Run: which crictl
	I1027 23:29:16.294882 1382384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:29:16.321680 1382384 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
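	crictl resolves its endpoint from the /etc/crictl.yaml written just above; the equivalent explicit check, a sketch assuming crictl is on PATH inside the node, is:
	
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version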
	I1027 23:29:16.321849 1382384 ssh_runner.go:195] Run: crio --version
	I1027 23:29:16.360828 1382384 ssh_runner.go:195] Run: crio --version
	I1027 23:29:16.393456 1382384 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 23:29:16.396391 1382384 cli_runner.go:164] Run: docker network inspect newest-cni-852936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:29:16.413033 1382384 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 23:29:16.416904 1382384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:29:16.429883 1382384 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1027 23:29:13.697418 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	W1027 23:29:16.200317 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	I1027 23:29:16.432630 1382384 kubeadm.go:884] updating cluster {Name:newest-cni-852936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:29:16.432775 1382384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:29:16.432862 1382384 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:29:16.470089 1382384 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:29:16.470114 1382384 crio.go:433] Images already preloaded, skipping extraction
	I1027 23:29:16.470176 1382384 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:29:16.502365 1382384 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:29:16.502412 1382384 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:29:16.502461 1382384 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 23:29:16.502589 1382384 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-852936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 23:29:16.502687 1382384 ssh_runner.go:195] Run: crio config
	I1027 23:29:16.576598 1382384 cni.go:84] Creating CNI manager for ""
	I1027 23:29:16.576620 1382384 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:29:16.576659 1382384 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1027 23:29:16.576689 1382384 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-852936 NodeName:newest-cni-852936 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:29:16.576834 1382384 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-852936"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
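	
	A note on the block above: minikube renders InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration as one multi-document YAML and ships it to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below). Such a rendering can be sanity-checked offline before it ever reaches a node; a minimal sketch, assuming kubeadm v1.34 is on PATH and the YAML above is saved locally as kubeadm.yaml:
	
	    # Validate all documents in the generated config; exits non-zero on schema errors.
	    kubeadm config validate --config kubeadm.yaml
	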
	
	I1027 23:29:16.576908 1382384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:29:16.584945 1382384 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:29:16.585026 1382384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:29:16.592502 1382384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 23:29:16.605849 1382384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:29:16.620041 1382384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1027 23:29:16.633545 1382384 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:29:16.637404 1382384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
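	
	The pair of commands above is an idempotent hosts-file update: the grep probes for an existing control-plane entry, then the bash one-liner strips any stale line before appending the current mapping, staging the result in a temp file so /etc/hosts is replaced in a single cp. A standalone sketch of the same pattern (the IP and hostname are just the values from this run):
	
	    IP=192.168.85.2 HOST=control-plane.minikube.internal
	    { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
	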
	I1027 23:29:16.648272 1382384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:29:16.775190 1382384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:29:16.792568 1382384 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936 for IP: 192.168.85.2
	I1027 23:29:16.792586 1382384 certs.go:195] generating shared ca certs ...
	I1027 23:29:16.792601 1382384 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:29:16.792765 1382384 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:29:16.792821 1382384 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:29:16.792833 1382384 certs.go:257] generating profile certs ...
	I1027 23:29:16.792916 1382384 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/client.key
	I1027 23:29:16.792993 1382384 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.key.7d12570b
	I1027 23:29:16.793036 1382384 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.key
	I1027 23:29:16.793150 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:29:16.793181 1382384 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:29:16.793202 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:29:16.793228 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:29:16.793255 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:29:16.793281 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:29:16.793330 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:29:16.793917 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:29:16.812607 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:29:16.829964 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:29:16.856222 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:29:16.873487 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 23:29:16.894161 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 23:29:16.922923 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:29:16.959397 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 23:29:17.006472 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:29:17.049337 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:29:17.081201 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:29:17.106034 1382384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:29:17.121728 1382384 ssh_runner.go:195] Run: openssl version
	I1027 23:29:17.129224 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:29:17.145507 1382384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:29:17.149674 1382384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:29:17.149765 1382384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:29:17.196710 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
	I1027 23:29:17.206114 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:29:17.214593 1382384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:29:17.218366 1382384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:29:17.218534 1382384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:29:17.260208 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:29:17.268391 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:29:17.276997 1382384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:29:17.281271 1382384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:29:17.281338 1382384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:29:17.323641 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
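	
	The test/ln/openssl sequence above wires each PEM into OpenSSL's hashed certificate directory: a file in /etc/ssl/certs is only found by openssl-based clients if it is also reachable as <subject-hash>.0, and `openssl x509 -hash` computes exactly that hash (b5213941 for minikubeCA.pem here). A sketch of the same wiring for one CA file, assuming openssl is installed:
	
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	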
	I1027 23:29:17.331756 1382384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:29:17.335672 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 23:29:17.382471 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 23:29:17.424359 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 23:29:17.467561 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 23:29:17.513139 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 23:29:17.567837 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
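	
	Each -checkend 86400 call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether control-plane certificates need regeneration before being reused. The same check scripted over the whole cert tree; a sketch:
	
	    for crt in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	        openssl x509 -noout -in "$crt" -checkend 86400 || echo "expires within 24h: $crt"
	    done
	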
	I1027 23:29:17.618470 1382384 kubeadm.go:401] StartCluster: {Name:newest-cni-852936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:29:17.618617 1382384 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:29:17.618713 1382384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:29:17.693163 1382384 cri.go:89] found id: ""
	I1027 23:29:17.693280 1382384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:29:17.707954 1382384 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 23:29:17.708031 1382384 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 23:29:17.708118 1382384 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 23:29:17.719144 1382384 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 23:29:17.719791 1382384 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-852936" does not appear in /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:29:17.720118 1382384 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-1132878/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-852936" cluster setting kubeconfig missing "newest-cni-852936" context setting]
	I1027 23:29:17.720642 1382384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:29:17.722636 1382384 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 23:29:17.745586 1382384 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 23:29:17.745669 1382384 kubeadm.go:602] duration metric: took 37.617775ms to restartPrimaryControlPlane
	I1027 23:29:17.745694 1382384 kubeadm.go:403] duration metric: took 127.234259ms to StartCluster
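	
	The diff above is the restart fast path: when the kubeadm.yaml shipped on a previous start is byte-identical to the freshly rendered .new file, minikube concludes the running control plane needs no reconfiguration. Reproduced by hand on the node, a sketch:
	
	    # diff exits 0 when the files match, i.e. no reconfiguration is needed.
	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	        && echo "no control-plane reconfiguration required"
	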
	I1027 23:29:17.745742 1382384 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:29:17.745841 1382384 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:29:17.746909 1382384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:29:17.747200 1382384 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:29:17.747688 1382384 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:29:17.747770 1382384 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-852936"
	I1027 23:29:17.747783 1382384 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-852936"
	W1027 23:29:17.747789 1382384 addons.go:247] addon storage-provisioner should already be in state true
	I1027 23:29:17.747811 1382384 host.go:66] Checking if "newest-cni-852936" exists ...
	I1027 23:29:17.748343 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:17.748641 1382384 config.go:182] Loaded profile config "newest-cni-852936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:29:17.748732 1382384 addons.go:69] Setting dashboard=true in profile "newest-cni-852936"
	I1027 23:29:17.748772 1382384 addons.go:238] Setting addon dashboard=true in "newest-cni-852936"
	W1027 23:29:17.748798 1382384 addons.go:247] addon dashboard should already be in state true
	I1027 23:29:17.748847 1382384 host.go:66] Checking if "newest-cni-852936" exists ...
	I1027 23:29:17.749340 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:17.749806 1382384 addons.go:69] Setting default-storageclass=true in profile "newest-cni-852936"
	I1027 23:29:17.749822 1382384 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-852936"
	I1027 23:29:17.750092 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:17.759323 1382384 out.go:179] * Verifying Kubernetes components...
	I1027 23:29:17.772375 1382384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:29:17.800819 1382384 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 23:29:17.801942 1382384 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:29:17.806725 1382384 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:29:17.806761 1382384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:29:17.806795 1382384 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 23:29:17.806836 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:17.811489 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 23:29:17.811514 1382384 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 23:29:17.811591 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:17.822485 1382384 addons.go:238] Setting addon default-storageclass=true in "newest-cni-852936"
	W1027 23:29:17.822507 1382384 addons.go:247] addon default-storageclass should already be in state true
	I1027 23:29:17.822532 1382384 host.go:66] Checking if "newest-cni-852936" exists ...
	I1027 23:29:17.822969 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:17.865449 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:17.877907 1382384 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:29:17.877928 1382384 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:29:17.877992 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:17.879738 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:17.900212 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:18.077474 1382384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:29:18.149293 1382384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:29:18.160724 1382384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:29:18.236930 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 23:29:18.237002 1382384 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 23:29:18.328299 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 23:29:18.328364 1382384 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 23:29:18.383950 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 23:29:18.384014 1382384 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 23:29:18.408588 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 23:29:18.408653 1382384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 23:29:18.442883 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 23:29:18.442954 1382384 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 23:29:18.464941 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 23:29:18.465009 1382384 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 23:29:18.491431 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 23:29:18.491509 1382384 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 23:29:18.511476 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 23:29:18.511545 1382384 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 23:29:18.536825 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 23:29:18.536903 1382384 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 23:29:18.559539 1382384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
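	
	Note that the kubectl invocation above applies all ten dashboard manifests in one call by repeating -f, rather than running ten separate applies. A shell can generate the same flag list from a glob; a sketch (this relies on the dashboard- filename prefix and on paths containing no spaces):
	
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	      $(printf -- '-f %s ' /etc/kubernetes/addons/dashboard-*.yaml)
	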
	W1027 23:29:18.696525 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	W1027 23:29:20.699896 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	I1027 23:29:21.700112 1377654 pod_ready.go:94] pod "coredns-66bc5c9577-lzssb" is "Ready"
	I1027 23:29:21.700136 1377654 pod_ready.go:86] duration metric: took 36.009275195s for pod "coredns-66bc5c9577-lzssb" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.703421 1377654 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.715777 1377654 pod_ready.go:94] pod "etcd-default-k8s-diff-port-336451" is "Ready"
	I1027 23:29:21.715842 1377654 pod_ready.go:86] duration metric: took 12.348506ms for pod "etcd-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.719027 1377654 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.728322 1377654 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-336451" is "Ready"
	I1027 23:29:21.728398 1377654 pod_ready.go:86] duration metric: took 9.29462ms for pod "kube-apiserver-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.732228 1377654 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.895924 1377654 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-336451" is "Ready"
	I1027 23:29:21.896004 1377654 pod_ready.go:86] duration metric: took 163.695676ms for pod "kube-controller-manager-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:22.098328 1377654 pod_ready.go:83] waiting for pod "kube-proxy-n4vzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:22.494663 1377654 pod_ready.go:94] pod "kube-proxy-n4vzn" is "Ready"
	I1027 23:29:22.494740 1377654 pod_ready.go:86] duration metric: took 396.322861ms for pod "kube-proxy-n4vzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:22.694755 1377654 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:23.095902 1377654 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-336451" is "Ready"
	I1027 23:29:23.095941 1377654 pod_ready.go:86] duration metric: took 401.110104ms for pod "kube-scheduler-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:23.095954 1377654 pod_ready.go:40] duration metric: took 37.409990985s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:29:23.191426 1377654 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:29:23.194537 1377654 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-336451" cluster and "default" namespace by default
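	
	The pod_ready loop above waits on one label selector per control-plane component (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) in turn. Roughly the same wait can be reproduced from the command line with kubectl wait (minikube's version also tolerates pods disappearing, which this sketch does not); assuming the kubeconfig context points at this cluster:
	
	    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	        kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=120s
	    done
	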
	I1027 23:29:23.865678 1382384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.78812787s)
	I1027 23:29:23.865733 1382384 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.716420658s)
	I1027 23:29:23.865764 1382384 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:29:23.865819 1382384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:29:23.865890 1382384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.705145908s)
	I1027 23:29:23.866282 1382384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.30666267s)
	I1027 23:29:23.869166 1382384 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-852936 addons enable metrics-server
	
	I1027 23:29:23.896432 1382384 api_server.go:72] duration metric: took 6.149164962s to wait for apiserver process to appear ...
	I1027 23:29:23.896452 1382384 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:29:23.896472 1382384 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 23:29:23.905254 1382384 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 23:29:23.905324 1382384 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 23:29:23.915351 1382384 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 23:29:23.918229 1382384 addons.go:514] duration metric: took 6.170528043s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 23:29:24.396619 1382384 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 23:29:24.404992 1382384 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 23:29:24.406155 1382384 api_server.go:141] control plane version: v1.34.1
	I1027 23:29:24.406180 1382384 api_server.go:131] duration metric: took 509.720774ms to wait for apiserver health ...
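	
	The initial 500 above is expected: the only failing check is the rbac/bootstrap-roles post-start hook, and once it completes the endpoint flips to 200 "ok", as seen half a second later. Since /healthz is readable without credentials under the default system:public-info-viewer binding, the same polling can be reproduced with curl; a sketch:
	
	    # Poll until the apiserver health endpoint reports ok.
	    until curl -sk https://192.168.85.2:8443/healthz | grep -qx ok; do sleep 0.5; done
	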
	I1027 23:29:24.406189 1382384 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:29:24.409864 1382384 system_pods.go:59] 8 kube-system pods found
	I1027 23:29:24.409906 1382384 system_pods.go:61] "coredns-66bc5c9577-jzn5z" [191e4eff-7490-4e8a-9231-7e634396b226] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 23:29:24.409916 1382384 system_pods.go:61] "etcd-newest-cni-852936" [4d42a25f-5e7b-4657-a6f1-d46bc06216dc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:29:24.409949 1382384 system_pods.go:61] "kindnet-q6tfx" [b3f08f81-257b-4bba-9acc-4b3c88d70bb7] Running
	I1027 23:29:24.409959 1382384 system_pods.go:61] "kube-apiserver-newest-cni-852936" [090b241c-c08c-4306-b40c-871e5421048b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:29:24.409967 1382384 system_pods.go:61] "kube-controller-manager-newest-cni-852936" [5016a35c-4906-416f-981d-3d8eafafac9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:29:24.409976 1382384 system_pods.go:61] "kube-proxy-qcz7m" [8263ca0a-34e2-4388-82ba-1714b8940cba] Running
	I1027 23:29:24.409988 1382384 system_pods.go:61] "kube-scheduler-newest-cni-852936" [4f47dc44-57da-47eb-b115-12f3d5bac007] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:29:24.409994 1382384 system_pods.go:61] "storage-provisioner" [ebb4e6b7-17b5-43ab-b54c-34a6b5b2caa2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 23:29:24.410017 1382384 system_pods.go:74] duration metric: took 3.807388ms to wait for pod list to return data ...
	I1027 23:29:24.410063 1382384 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:29:24.412702 1382384 default_sa.go:45] found service account: "default"
	I1027 23:29:24.412729 1382384 default_sa.go:55] duration metric: took 2.657145ms for default service account to be created ...
	I1027 23:29:24.412743 1382384 kubeadm.go:587] duration metric: took 6.665481562s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 23:29:24.412760 1382384 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:29:24.415832 1382384 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:29:24.415864 1382384 node_conditions.go:123] node cpu capacity is 2
	I1027 23:29:24.415877 1382384 node_conditions.go:105] duration metric: took 3.112233ms to run NodePressure ...
	I1027 23:29:24.415891 1382384 start.go:242] waiting for startup goroutines ...
	I1027 23:29:24.415931 1382384 start.go:247] waiting for cluster config update ...
	I1027 23:29:24.415944 1382384 start.go:256] writing updated cluster config ...
	I1027 23:29:24.416251 1382384 ssh_runner.go:195] Run: rm -f paused
	I1027 23:29:24.473504 1382384 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:29:24.476808 1382384 out.go:179] * Done! kubectl is now configured to use "newest-cni-852936" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 27 23:29:12 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:12.123367234Z" level=info msg="Removed container 177bb2576d6d6e598b497b6a66958a8cf28e9e66365b4f64584f9a08c07fe9f2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m6dms/dashboard-metrics-scraper" id=eeb18849-9cd2-4ca8-beb2-e8f83da78f9c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 23:29:15 default-k8s-diff-port-336451 conmon[1146]: conmon d77a4209b5d8b6166e65 <ninfo>: container 1153 exited with status 1
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.122668022Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d22a7c97-575e-4758-9307-aad152b9e71a name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.124231555Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3ea6b815-4017-43d4-9df2-4f5faad09b00 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.125242572Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b8fc6fef-9950-4572-a1a9-c4ca627aa3cf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.125376014Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.137808065Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.138000716Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/94737289a9e6465aef3b7b36bb5b0875e4a56380e6a03c8f9b60b7717aeaa326/merged/etc/passwd: no such file or directory"
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.138036819Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/94737289a9e6465aef3b7b36bb5b0875e4a56380e6a03c8f9b60b7717aeaa326/merged/etc/group: no such file or directory"
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.139167838Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.175864812Z" level=info msg="Created container c63a21c878d688b09782a9d01e91abf9249e4e4f9b61c603169d9ee05fb2d2ee: kube-system/storage-provisioner/storage-provisioner" id=b8fc6fef-9950-4572-a1a9-c4ca627aa3cf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.178710407Z" level=info msg="Starting container: c63a21c878d688b09782a9d01e91abf9249e4e4f9b61c603169d9ee05fb2d2ee" id=9b385a48-b9d9-4bb0-8151-64d5b8aef4c9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.18357669Z" level=info msg="Started container" PID=1642 containerID=c63a21c878d688b09782a9d01e91abf9249e4e4f9b61c603169d9ee05fb2d2ee description=kube-system/storage-provisioner/storage-provisioner id=9b385a48-b9d9-4bb0-8151-64d5b8aef4c9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1620ca91315a8c9bdbb1959a770b2c16bdc4578621da56c94324efe2073e52ef
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.462928518Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.46687986Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.467040543Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.46712733Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.470973808Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.471167977Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.471242653Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.474432228Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.47457632Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.474660276Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.477858303Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.477997423Z" level=info msg="Updated default CNI network name to kindnet"
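	
	The CREATE .temp / WRITE / RENAME sequence that CRI-O's CNI monitor reports above is the standard atomic-update pattern: kindnet writes the new conflist to a .temp file and then renames it into place, so the runtime never observes a half-written network config. A sketch of the same pattern, assuming write access to /etc/cni/net.d (the conflist body here is a hypothetical minimal ptp config, not kindnet's actual output):
	
	    CONF=/etc/cni/net.d/10-kindnet.conflist
	    printf '%s\n' '{"cniVersion": "0.3.1", "name": "kindnet", "plugins": [{"type": "ptp"}]}' > "${CONF}.temp"
	    mv "${CONF}.temp" "$CONF"   # rename(2) is atomic within a filesystem
	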
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c63a21c878d68       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago       Running             storage-provisioner         2                   1620ca91315a8       storage-provisioner                                    kube-system
	eaf10ad419dd1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago       Exited              dashboard-metrics-scraper   2                   8c58ed4d8f432       dashboard-metrics-scraper-6ffb444bf9-m6dms             kubernetes-dashboard
	d9cc060395e7c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago       Running             kubernetes-dashboard        0                   155d0f05336af       kubernetes-dashboard-855c9754f9-9qnl7                  kubernetes-dashboard
	7fcdf13057a7d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   ece44cd15bf43       busybox                                                default
	fd096bbd312ce       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   9339e34ef2bfe       coredns-66bc5c9577-lzssb                               kube-system
	e286d3355f877       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   68b515cab5e27       kindnet-ht7mm                                          kube-system
	d77a4209b5d8b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago       Exited              storage-provisioner         1                   1620ca91315a8       storage-provisioner                                    kube-system
	31fd7339c9b68       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   b126d22e46b90       kube-proxy-n4vzn                                       kube-system
	7f66ec5899883       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   75ca3ecbc6482       etcd-default-k8s-diff-port-336451                      kube-system
	e042d7ccfe395       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   bb6bdcdeb02b4       kube-scheduler-default-k8s-diff-port-336451            kube-system
	69c1f90555bd0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   11d9d45a7f20e       kube-controller-manager-default-k8s-diff-port-336451   kube-system
	ee6b21c638763       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   2e8c85ce6acb7       kube-apiserver-default-k8s-diff-port-336451            kube-system
	
	
	==> coredns [fd096bbd312ce4ab42d6ec3b91f6f324ae5551679e881b224b3a5f4205916eee] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60260 - 9416 "HINFO IN 7204772191305620454.5806717990721506346. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014237049s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-336451
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-336451
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=default-k8s-diff-port-336451
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T23_27_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 23:27:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-336451
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 23:29:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 23:29:13 +0000   Mon, 27 Oct 2025 23:27:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 23:29:13 +0000   Mon, 27 Oct 2025 23:27:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 23:29:13 +0000   Mon, 27 Oct 2025 23:27:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 23:29:13 +0000   Mon, 27 Oct 2025 23:27:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-336451
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                b39d5467-41ea-430a-8620-2c79f46d3819
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-lzssb                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m23s
	  kube-system                 etcd-default-k8s-diff-port-336451                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m31s
	  kube-system                 kindnet-ht7mm                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m23s
	  kube-system                 kube-apiserver-default-k8s-diff-port-336451             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-336451    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-n4vzn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-scheduler-default-k8s-diff-port-336451             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-m6dms              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9qnl7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m21s                  kube-proxy       
	  Normal   Starting                 52s                    kube-proxy       
	  Warning  CgroupV1                 2m38s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m38s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m38s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m38s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m29s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m28s                  kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m28s                  kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m28s                  kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m24s                  node-controller  Node default-k8s-diff-port-336451 event: Registered Node default-k8s-diff-port-336451 in Controller
	  Normal   NodeReady                101s                   kubelet          Node default-k8s-diff-port-336451 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                    node-controller  Node default-k8s-diff-port-336451 event: Registered Node default-k8s-diff-port-336451 in Controller
	
	
	==> dmesg <==
	[Oct27 23:06] overlayfs: idmapped layers are currently not supported
	[  +3.129054] overlayfs: idmapped layers are currently not supported
	[Oct27 23:08] overlayfs: idmapped layers are currently not supported
	[Oct27 23:09] overlayfs: idmapped layers are currently not supported
	[  +0.696324] overlayfs: idmapped layers are currently not supported
	[ +42.065460] overlayfs: idmapped layers are currently not supported
	[Oct27 23:10] overlayfs: idmapped layers are currently not supported
	[ +23.722860] overlayfs: idmapped layers are currently not supported
	[Oct27 23:16] overlayfs: idmapped layers are currently not supported
	[Oct27 23:17] overlayfs: idmapped layers are currently not supported
	[Oct27 23:18] overlayfs: idmapped layers are currently not supported
	[Oct27 23:19] overlayfs: idmapped layers are currently not supported
	[Oct27 23:20] overlayfs: idmapped layers are currently not supported
	[Oct27 23:21] overlayfs: idmapped layers are currently not supported
	[Oct27 23:22] overlayfs: idmapped layers are currently not supported
	[ +34.590925] overlayfs: idmapped layers are currently not supported
	[Oct27 23:23] overlayfs: idmapped layers are currently not supported
	[  +6.906011] overlayfs: idmapped layers are currently not supported
	[Oct27 23:25] overlayfs: idmapped layers are currently not supported
	[  +2.284017] overlayfs: idmapped layers are currently not supported
	[Oct27 23:27] overlayfs: idmapped layers are currently not supported
	[  +6.661421] overlayfs: idmapped layers are currently not supported
	[Oct27 23:28] overlayfs: idmapped layers are currently not supported
	[ +11.644898] overlayfs: idmapped layers are currently not supported
	[Oct27 23:29] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7f66ec5899883992c1749593bfd4630c3ce8244c7e186676fa13e99cb58e4a03] <==
	{"level":"warn","ts":"2025-10-27T23:28:39.851445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:39.887103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:39.915321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:39.951291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:39.988698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.006987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.030608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.053250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.095550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.150163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.157963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.208519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.251402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.311310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.339714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.368247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.397258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.443140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.468672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.490612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.523075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.549836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.608593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.610503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.728843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44358","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:29:38 up  6:12,  0 user,  load average: 5.02, 4.43, 3.61
	Linux default-k8s-diff-port-336451 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e286d3355f877874f1258955d812cbe73eef79f899dbe2144abe0c20b709727a] <==
	I1027 23:28:45.148456       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:28:45.148730       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 23:28:45.148874       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:28:45.148894       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:28:45.148910       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:28:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:28:45.486697       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:28:45.486727       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:28:45.486737       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:28:45.487253       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 23:29:15.460712       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 23:29:15.487412       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 23:29:15.487594       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 23:29:15.487763       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1027 23:29:16.986977       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 23:29:16.987114       1 metrics.go:72] Registering metrics
	I1027 23:29:16.987211       1 controller.go:711] "Syncing nftables rules"
	I1027 23:29:25.462527       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 23:29:25.462644       1 main.go:301] handling current node
	I1027 23:29:35.469082       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 23:29:35.469120       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ee6b21c638763f9bea06ed3eb613912563fe107d49320d174cfb911c51258b74] <==
	I1027 23:28:42.569055       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 23:28:42.569190       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 23:28:42.570070       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 23:28:42.827049       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 23:28:42.857016       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 23:28:42.857050       1 policy_source.go:240] refreshing policies
	I1027 23:28:42.857242       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1027 23:28:42.857299       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 23:28:42.878550       1 aggregator.go:171] initial CRD sync complete...
	I1027 23:28:42.878578       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 23:28:42.878585       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 23:28:42.878593       1 cache.go:39] Caches are synced for autoregister controller
	I1027 23:28:42.893606       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 23:28:42.982528       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1027 23:28:43.031819       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 23:28:43.835892       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 23:28:44.214513       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 23:28:44.582248       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 23:28:44.860958       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 23:28:44.880013       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 23:28:45.451144       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.54.133"}
	I1027 23:28:45.520515       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.84.158"}
	I1027 23:28:47.489891       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 23:28:47.828089       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 23:28:47.927808       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [69c1f90555bd0a08896702d72889b7cbea6dc8f6bf3d24bcc9936a63461f070f] <==
	I1027 23:28:47.443225       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 23:28:47.443277       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:28:47.450438       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 23:28:47.452806       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 23:28:47.457165       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 23:28:47.464805       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 23:28:47.465964       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 23:28:47.466037       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 23:28:47.467275       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:28:47.467294       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 23:28:47.467303       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 23:28:47.468649       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 23:28:47.468692       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 23:28:47.471135       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 23:28:47.471278       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 23:28:47.471687       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 23:28:47.475524       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 23:28:47.475615       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 23:28:47.477987       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 23:28:47.490149       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:28:47.490202       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 23:28:47.499364       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 23:28:47.511622       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 23:28:47.515853       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 23:28:47.526069       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [31fd7339c9b6866e0f75aa299a3f5f421e9b2e21a2e13ea31cc69466a502ee2c] <==
	I1027 23:28:45.666272       1 server_linux.go:53] "Using iptables proxy"
	I1027 23:28:45.763997       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 23:28:45.877872       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 23:28:45.878114       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 23:28:45.878286       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 23:28:45.961858       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:28:45.961973       1 server_linux.go:132] "Using iptables Proxier"
	I1027 23:28:45.966103       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 23:28:45.966613       1 server.go:527] "Version info" version="v1.34.1"
	I1027 23:28:45.966812       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:28:45.968200       1 config.go:200] "Starting service config controller"
	I1027 23:28:45.968390       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 23:28:45.968444       1 config.go:106] "Starting endpoint slice config controller"
	I1027 23:28:45.968488       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 23:28:45.968525       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 23:28:45.968553       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 23:28:45.969181       1 config.go:309] "Starting node config controller"
	I1027 23:28:45.971867       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 23:28:45.971957       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 23:28:46.071116       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 23:28:46.072216       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 23:28:46.072340       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e042d7ccfe395ac64bbfa1b1099e7ff453e4d67df7754503aac635f0f8ba71a8] <==
	I1027 23:28:40.605174       1 serving.go:386] Generated self-signed cert in-memory
	I1027 23:28:45.010333       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 23:28:45.010375       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:28:45.073248       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 23:28:45.073311       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 23:28:45.073378       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:28:45.073390       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:28:45.073404       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:28:45.073411       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:28:45.073768       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 23:28:45.073872       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 23:28:45.187570       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:28:45.187644       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 23:28:45.187740       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 23:28:48 default-k8s-diff-port-336451 kubelet[777]: I1027 23:28:48.298344     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b7431c94-0d43-4b74-900a-1d361016710a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-9qnl7\" (UID: \"b7431c94-0d43-4b74-900a-1d361016710a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9qnl7"
	Oct 27 23:28:48 default-k8s-diff-port-336451 kubelet[777]: I1027 23:28:48.298415     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhzrl\" (UniqueName: \"kubernetes.io/projected/b7431c94-0d43-4b74-900a-1d361016710a-kube-api-access-mhzrl\") pod \"kubernetes-dashboard-855c9754f9-9qnl7\" (UID: \"b7431c94-0d43-4b74-900a-1d361016710a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9qnl7"
	Oct 27 23:28:48 default-k8s-diff-port-336451 kubelet[777]: W1027 23:28:48.481244     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59/crio-155d0f05336af5592b0a628082022e28e43783921de7e5d820531515052e42d1 WatchSource:0}: Error finding container 155d0f05336af5592b0a628082022e28e43783921de7e5d820531515052e42d1: Status 404 returned error can't find the container with id 155d0f05336af5592b0a628082022e28e43783921de7e5d820531515052e42d1
	Oct 27 23:28:51 default-k8s-diff-port-336451 kubelet[777]: I1027 23:28:51.585008     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 23:28:56 default-k8s-diff-port-336451 kubelet[777]: I1027 23:28:56.053440     777 scope.go:117] "RemoveContainer" containerID="26d0ab726831431d5e33718f92fc0965c0102605fffeadf80af47fd90644d24d"
	Oct 27 23:28:57 default-k8s-diff-port-336451 kubelet[777]: I1027 23:28:57.062452     777 scope.go:117] "RemoveContainer" containerID="26d0ab726831431d5e33718f92fc0965c0102605fffeadf80af47fd90644d24d"
	Oct 27 23:28:57 default-k8s-diff-port-336451 kubelet[777]: I1027 23:28:57.062979     777 scope.go:117] "RemoveContainer" containerID="177bb2576d6d6e598b497b6a66958a8cf28e9e66365b4f64584f9a08c07fe9f2"
	Oct 27 23:28:57 default-k8s-diff-port-336451 kubelet[777]: E1027 23:28:57.063423     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m6dms_kubernetes-dashboard(f8f98b14-af0d-4d78-929d-b0d1f014939b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m6dms" podUID="f8f98b14-af0d-4d78-929d-b0d1f014939b"
	Oct 27 23:28:58 default-k8s-diff-port-336451 kubelet[777]: I1027 23:28:58.066663     777 scope.go:117] "RemoveContainer" containerID="177bb2576d6d6e598b497b6a66958a8cf28e9e66365b4f64584f9a08c07fe9f2"
	Oct 27 23:28:58 default-k8s-diff-port-336451 kubelet[777]: E1027 23:28:58.066812     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m6dms_kubernetes-dashboard(f8f98b14-af0d-4d78-929d-b0d1f014939b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m6dms" podUID="f8f98b14-af0d-4d78-929d-b0d1f014939b"
	Oct 27 23:28:59 default-k8s-diff-port-336451 kubelet[777]: I1027 23:28:59.068989     777 scope.go:117] "RemoveContainer" containerID="177bb2576d6d6e598b497b6a66958a8cf28e9e66365b4f64584f9a08c07fe9f2"
	Oct 27 23:28:59 default-k8s-diff-port-336451 kubelet[777]: E1027 23:28:59.069150     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m6dms_kubernetes-dashboard(f8f98b14-af0d-4d78-929d-b0d1f014939b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m6dms" podUID="f8f98b14-af0d-4d78-929d-b0d1f014939b"
	Oct 27 23:29:11 default-k8s-diff-port-336451 kubelet[777]: I1027 23:29:11.688557     777 scope.go:117] "RemoveContainer" containerID="177bb2576d6d6e598b497b6a66958a8cf28e9e66365b4f64584f9a08c07fe9f2"
	Oct 27 23:29:12 default-k8s-diff-port-336451 kubelet[777]: I1027 23:29:12.108762     777 scope.go:117] "RemoveContainer" containerID="177bb2576d6d6e598b497b6a66958a8cf28e9e66365b4f64584f9a08c07fe9f2"
	Oct 27 23:29:12 default-k8s-diff-port-336451 kubelet[777]: I1027 23:29:12.109093     777 scope.go:117] "RemoveContainer" containerID="eaf10ad419dd1638041c2c094f64e06cb64c2fac32344129da5e4dbe35087490"
	Oct 27 23:29:12 default-k8s-diff-port-336451 kubelet[777]: E1027 23:29:12.109355     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m6dms_kubernetes-dashboard(f8f98b14-af0d-4d78-929d-b0d1f014939b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m6dms" podUID="f8f98b14-af0d-4d78-929d-b0d1f014939b"
	Oct 27 23:29:12 default-k8s-diff-port-336451 kubelet[777]: I1027 23:29:12.131272     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9qnl7" podStartSLOduration=11.306867992 podStartE2EDuration="24.131255344s" podCreationTimestamp="2025-10-27 23:28:48 +0000 UTC" firstStartedPulling="2025-10-27 23:28:48.518838978 +0000 UTC m=+14.011908252" lastFinishedPulling="2025-10-27 23:29:01.34322633 +0000 UTC m=+26.836295604" observedRunningTime="2025-10-27 23:29:02.104389919 +0000 UTC m=+27.597459193" watchObservedRunningTime="2025-10-27 23:29:12.131255344 +0000 UTC m=+37.624324626"
	Oct 27 23:29:16 default-k8s-diff-port-336451 kubelet[777]: I1027 23:29:16.122116     777 scope.go:117] "RemoveContainer" containerID="d77a4209b5d8b6166e65f50776e9be005d032b980c041b2b25fb2f68396863f1"
	Oct 27 23:29:18 default-k8s-diff-port-336451 kubelet[777]: I1027 23:29:18.402828     777 scope.go:117] "RemoveContainer" containerID="eaf10ad419dd1638041c2c094f64e06cb64c2fac32344129da5e4dbe35087490"
	Oct 27 23:29:18 default-k8s-diff-port-336451 kubelet[777]: E1027 23:29:18.403002     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m6dms_kubernetes-dashboard(f8f98b14-af0d-4d78-929d-b0d1f014939b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m6dms" podUID="f8f98b14-af0d-4d78-929d-b0d1f014939b"
	Oct 27 23:29:29 default-k8s-diff-port-336451 kubelet[777]: I1027 23:29:29.689456     777 scope.go:117] "RemoveContainer" containerID="eaf10ad419dd1638041c2c094f64e06cb64c2fac32344129da5e4dbe35087490"
	Oct 27 23:29:29 default-k8s-diff-port-336451 kubelet[777]: E1027 23:29:29.690131     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m6dms_kubernetes-dashboard(f8f98b14-af0d-4d78-929d-b0d1f014939b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m6dms" podUID="f8f98b14-af0d-4d78-929d-b0d1f014939b"
	Oct 27 23:29:35 default-k8s-diff-port-336451 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 23:29:35 default-k8s-diff-port-336451 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 23:29:35 default-k8s-diff-port-336451 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [d9cc060395e7c461eef94cb5f9bb56799fcbc841f9f373397f342e2d95f6b958] <==
	2025/10/27 23:29:01 Starting overwatch
	2025/10/27 23:29:01 Using namespace: kubernetes-dashboard
	2025/10/27 23:29:01 Using in-cluster config to connect to apiserver
	2025/10/27 23:29:01 Using secret token for csrf signing
	2025/10/27 23:29:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 23:29:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 23:29:01 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 23:29:01 Generating JWE encryption key
	2025/10/27 23:29:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 23:29:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 23:29:02 Initializing JWE encryption key from synchronized object
	2025/10/27 23:29:02 Creating in-cluster Sidecar client
	2025/10/27 23:29:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 23:29:02 Serving insecurely on HTTP port: 9090
	2025/10/27 23:29:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [c63a21c878d688b09782a9d01e91abf9249e4e4f9b61c603169d9ee05fb2d2ee] <==
	I1027 23:29:16.208369       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 23:29:16.233527       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 23:29:16.233749       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 23:29:16.237275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:19.698752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:23.959506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:27.558522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:30.612227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:33.635125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:33.646540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:29:33.646832       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 23:29:33.647127       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-336451_426f74d4-a3b4-4edf-aacf-2b514f271032!
	I1027 23:29:33.648604       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2176cbc4-0409-4665-84bd-c2de79a00ad7", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-336451_426f74d4-a3b4-4edf-aacf-2b514f271032 became leader
	W1027 23:29:33.661421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:33.665156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:29:33.747619       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-336451_426f74d4-a3b4-4edf-aacf-2b514f271032!
	W1027 23:29:35.667960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:35.673580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:37.677039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:37.682004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d77a4209b5d8b6166e65f50776e9be005d032b980c041b2b25fb2f68396863f1] <==
	I1027 23:28:45.149792       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 23:29:15.176827       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-336451 -n default-k8s-diff-port-336451
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-336451 -n default-k8s-diff-port-336451: exit status 2 (368.218573ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-336451 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-336451
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-336451:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59",
	        "Created": "2025-10-27T23:26:41.328254644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1378056,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T23:28:27.179421892Z",
	            "FinishedAt": "2025-10-27T23:28:25.838201393Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59/hostname",
	        "HostsPath": "/var/lib/docker/containers/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59/hosts",
	        "LogPath": "/var/lib/docker/containers/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59-json.log",
	        "Name": "/default-k8s-diff-port-336451",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-336451:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-336451",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59",
	                "LowerDir": "/var/lib/docker/overlay2/db307246a30588d0ae121c4ec53a2353a232f31a81ee681f92ae6a0a6bc49dc6-init/diff:/var/lib/docker/overlay2/834b3bd35045dd91ff7c2af01ce767a59052be3eb48635ca7905541335c632d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/db307246a30588d0ae121c4ec53a2353a232f31a81ee681f92ae6a0a6bc49dc6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/db307246a30588d0ae121c4ec53a2353a232f31a81ee681f92ae6a0a6bc49dc6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/db307246a30588d0ae121c4ec53a2353a232f31a81ee681f92ae6a0a6bc49dc6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-336451",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-336451/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-336451",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-336451",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-336451",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "136be4ace32a72fc57cdb4e3941d14f7ae54c64988c6ef37260cf5b8a57ca5e4",
	            "SandboxKey": "/var/run/docker/netns/136be4ace32a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34599"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34600"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34603"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34601"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34602"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-336451": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:4a:b9:62:d9:d4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "55da9c2196e319a24b4d34567d8cd7569236804748720d465d6d478b5766bd82",
	                    "EndpointID": "51d30e0e130ab355e8b31854de6e3628e4f7f114807ddc6cd221c98eb8b28a8c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-336451",
	                        "8835f98b0ace"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-336451 -n default-k8s-diff-port-336451
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-336451 -n default-k8s-diff-port-336451: exit status 2 (345.750097ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-336451 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-336451 logs -n 25: (1.287548199s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-247293                                                                                                                                                                                                               │ disable-driver-mounts-247293 │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ start   │ -p default-k8s-diff-port-336451 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:28 UTC │
	│ addons  │ enable metrics-server -p embed-certs-790322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │                     │
	│ stop    │ -p embed-certs-790322 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-790322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:26 UTC │
	│ start   │ -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:26 UTC │ 27 Oct 25 23:27 UTC │
	│ image   │ embed-certs-790322 image list --format=json                                                                                                                                                                                                   │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ pause   │ -p embed-certs-790322 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-336451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-336451 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ delete  │ -p embed-certs-790322                                                                                                                                                                                                                         │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ delete  │ -p embed-certs-790322                                                                                                                                                                                                                         │ embed-certs-790322           │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ start   │ -p newest-cni-852936 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:29 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-336451 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:28 UTC │
	│ start   │ -p default-k8s-diff-port-336451 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:28 UTC │ 27 Oct 25 23:29 UTC │
	│ addons  │ enable metrics-server -p newest-cni-852936 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │                     │
	│ stop    │ -p newest-cni-852936 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ addons  │ enable dashboard -p newest-cni-852936 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ start   │ -p newest-cni-852936 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ image   │ newest-cni-852936 image list --format=json                                                                                                                                                                                                    │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ pause   │ -p newest-cni-852936 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │                     │
	│ delete  │ -p newest-cni-852936                                                                                                                                                                                                                          │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ delete  │ -p newest-cni-852936                                                                                                                                                                                                                          │ newest-cni-852936            │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ image   │ default-k8s-diff-port-336451 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │ 27 Oct 25 23:29 UTC │
	│ pause   │ -p default-k8s-diff-port-336451 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-336451 │ jenkins │ v1.37.0 │ 27 Oct 25 23:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 23:29:09
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
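	(Each entry below follows the glog convention named in the header above: a severity letter I/W/E/F, the date as mmdd, a timestamp, the thread id, and the emitting source file and line. For example, "I1027 23:29:09.766117 1382384 out.go:360]" is an Info line logged on Oct 27 at 23:29:09 by thread 1382384 from out.go line 360.)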
	I1027 23:29:09.766117 1382384 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:29:09.766264 1382384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:29:09.766276 1382384 out.go:374] Setting ErrFile to fd 2...
	I1027 23:29:09.766281 1382384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:29:09.766839 1382384 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:29:09.767786 1382384 out.go:368] Setting JSON to false
	I1027 23:29:09.769056 1382384 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22299,"bootTime":1761585451,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 23:29:09.769139 1382384 start.go:143] virtualization:  
	I1027 23:29:09.772858 1382384 out.go:179] * [newest-cni-852936] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 23:29:09.776686 1382384 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:29:09.776813 1382384 notify.go:221] Checking for updates...
	I1027 23:29:09.782290 1382384 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:29:09.785212 1382384 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:29:09.788210 1382384 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 23:29:09.791116 1382384 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 23:29:09.793964 1382384 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 23:29:09.797372 1382384 config.go:182] Loaded profile config "newest-cni-852936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:29:09.797914 1382384 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:29:09.833947 1382384 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 23:29:09.834073 1382384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:29:09.893931 1382384 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 23:29:09.878864517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:29:09.894062 1382384 docker.go:318] overlay module found
	I1027 23:29:09.897336 1382384 out.go:179] * Using the docker driver based on existing profile
	I1027 23:29:09.900341 1382384 start.go:307] selected driver: docker
	I1027 23:29:09.900381 1382384 start.go:928] validating driver "docker" against &{Name:newest-cni-852936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:29:09.900493 1382384 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:29:09.901343 1382384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:29:09.956323 1382384 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 23:29:09.947321156 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:29:09.956662 1382384 start_flags.go:1010] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 23:29:09.956696 1382384 cni.go:84] Creating CNI manager for ""
	I1027 23:29:09.956755 1382384 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:29:09.956801 1382384 start.go:351] cluster config:
	{Name:newest-cni-852936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:29:09.959906 1382384 out.go:179] * Starting "newest-cni-852936" primary control-plane node in "newest-cni-852936" cluster
	I1027 23:29:09.962722 1382384 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 23:29:09.965885 1382384 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 23:29:09.968839 1382384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:29:09.968947 1382384 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 23:29:09.968971 1382384 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 23:29:09.968983 1382384 cache.go:59] Caching tarball of preloaded images
	I1027 23:29:09.969090 1382384 preload.go:233] Found /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 23:29:09.969100 1382384 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 23:29:09.969208 1382384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/config.json ...
	I1027 23:29:09.999805 1382384 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 23:29:09.999844 1382384 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 23:29:09.999859 1382384 cache.go:233] Successfully downloaded all kic artifacts
	I1027 23:29:09.999881 1382384 start.go:360] acquireMachinesLock for newest-cni-852936: {Name:mk3f294285068916d485e6bfcdad9561ce18d17d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 23:29:09.999976 1382384 start.go:364] duration metric: took 68.694µs to acquireMachinesLock for "newest-cni-852936"
	I1027 23:29:10.000031 1382384 start.go:96] Skipping create...Using existing machine configuration
	I1027 23:29:10.000085 1382384 fix.go:55] fixHost starting: 
	I1027 23:29:10.000495 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:10.025325 1382384 fix.go:113] recreateIfNeeded on newest-cni-852936: state=Stopped err=<nil>
	W1027 23:29:10.025370 1382384 fix.go:139] unexpected machine state, will restart: <nil>
	W1027 23:29:08.700022 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	W1027 23:29:11.196407 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	I1027 23:29:10.028623 1382384 out.go:252] * Restarting existing docker container for "newest-cni-852936" ...
	I1027 23:29:10.028792 1382384 cli_runner.go:164] Run: docker start newest-cni-852936
	I1027 23:29:10.308194 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:10.330658 1382384 kic.go:430] container "newest-cni-852936" state is running.
	I1027 23:29:10.331059 1382384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852936
	I1027 23:29:10.353242 1382384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/config.json ...
	I1027 23:29:10.353470 1382384 machine.go:94] provisionDockerMachine start ...
	I1027 23:29:10.353542 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:10.372326 1382384 main.go:143] libmachine: Using SSH client type: native
	I1027 23:29:10.372679 1382384 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34604 <nil> <nil>}
	I1027 23:29:10.372697 1382384 main.go:143] libmachine: About to run SSH command:
	hostname
	I1027 23:29:10.373227 1382384 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54926->127.0.0.1:34604: read: connection reset by peer
	I1027 23:29:13.522271 1382384 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-852936
	
	I1027 23:29:13.522368 1382384 ubuntu.go:182] provisioning hostname "newest-cni-852936"
	I1027 23:29:13.522473 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:13.543423 1382384 main.go:143] libmachine: Using SSH client type: native
	I1027 23:29:13.543747 1382384 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34604 <nil> <nil>}
	I1027 23:29:13.543767 1382384 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-852936 && echo "newest-cni-852936" | sudo tee /etc/hostname
	I1027 23:29:13.705024 1382384 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-852936
	
	I1027 23:29:13.705100 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:13.724776 1382384 main.go:143] libmachine: Using SSH client type: native
	I1027 23:29:13.725087 1382384 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34604 <nil> <nil>}
	I1027 23:29:13.725105 1382384 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-852936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-852936/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-852936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 23:29:13.874768 1382384 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1027 23:29:13.874793 1382384 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21790-1132878/.minikube CaCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21790-1132878/.minikube}
	I1027 23:29:13.874815 1382384 ubuntu.go:190] setting up certificates
	I1027 23:29:13.874826 1382384 provision.go:84] configureAuth start
	I1027 23:29:13.874883 1382384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852936
	I1027 23:29:13.897512 1382384 provision.go:143] copyHostCerts
	I1027 23:29:13.897574 1382384 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem, removing ...
	I1027 23:29:13.897589 1382384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem
	I1027 23:29:13.897665 1382384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.pem (1082 bytes)
	I1027 23:29:13.897760 1382384 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem, removing ...
	I1027 23:29:13.897765 1382384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem
	I1027 23:29:13.897791 1382384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/cert.pem (1123 bytes)
	I1027 23:29:13.897849 1382384 exec_runner.go:144] found /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem, removing ...
	I1027 23:29:13.897854 1382384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem
	I1027 23:29:13.897875 1382384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21790-1132878/.minikube/key.pem (1675 bytes)
	I1027 23:29:13.897919 1382384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem org=jenkins.newest-cni-852936 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-852936]
	I1027 23:29:14.197889 1382384 provision.go:177] copyRemoteCerts
	I1027 23:29:14.198003 1382384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 23:29:14.198069 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.216790 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:14.322005 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1027 23:29:14.339619 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 23:29:14.357698 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 23:29:14.374994 1382384 provision.go:87] duration metric: took 500.144707ms to configureAuth
	I1027 23:29:14.375019 1382384 ubuntu.go:206] setting minikube options for container-runtime
	I1027 23:29:14.375217 1382384 config.go:182] Loaded profile config "newest-cni-852936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:29:14.375326 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.392639 1382384 main.go:143] libmachine: Using SSH client type: native
	I1027 23:29:14.392951 1382384 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34604 <nil> <nil>}
	I1027 23:29:14.392965 1382384 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 23:29:14.687600 1382384 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 23:29:14.687621 1382384 machine.go:97] duration metric: took 4.334134462s to provisionDockerMachine
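	(The /etc/sysconfig/crio.minikube drop-in written just above only takes effect because the kicbase image's crio.service sources it; a minimal sketch of the assumed unit wiring, which is not shown in this log:
	
	    [Service]
	    EnvironmentFile=-/etc/sysconfig/crio.minikube
	    ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS
	
	so the --insecure-registry flag for the 10.96.0.0/12 service CIDR survives every "systemctl restart crio".)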
	I1027 23:29:14.687665 1382384 start.go:293] postStartSetup for "newest-cni-852936" (driver="docker")
	I1027 23:29:14.687685 1382384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 23:29:14.687758 1382384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 23:29:14.687803 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.707820 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:14.810235 1382384 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 23:29:14.813577 1382384 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 23:29:14.813651 1382384 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 23:29:14.813665 1382384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/addons for local assets ...
	I1027 23:29:14.813736 1382384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21790-1132878/.minikube/files for local assets ...
	I1027 23:29:14.813819 1382384 filesync.go:149] local asset: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem -> 11347352.pem in /etc/ssl/certs
	I1027 23:29:14.813926 1382384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 23:29:14.821590 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:29:14.839199 1382384 start.go:296] duration metric: took 151.517291ms for postStartSetup
	I1027 23:29:14.839285 1382384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 23:29:14.839332 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.857380 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:14.963797 1382384 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 23:29:14.968647 1382384 fix.go:57] duration metric: took 4.968601832s for fixHost
	I1027 23:29:14.968672 1382384 start.go:83] releasing machines lock for "newest-cni-852936", held for 4.96867508s
	I1027 23:29:14.968743 1382384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-852936
	I1027 23:29:14.985572 1382384 ssh_runner.go:195] Run: cat /version.json
	I1027 23:29:14.985633 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:14.985873 1382384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 23:29:14.985939 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:15.005851 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:15.021224 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:15.134518 1382384 ssh_runner.go:195] Run: systemctl --version
	I1027 23:29:15.236918 1382384 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 23:29:15.280309 1382384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 23:29:15.285018 1382384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 23:29:15.285087 1382384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 23:29:15.293768 1382384 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
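	(Note: the find invocation above is printed with its shell escaping stripped by the logger. A runnable equivalent, assuming GNU find as shipped on this Ubuntu host, would be:
	
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;
	
	It renames any pre-existing bridge/podman CNI configs to *.mk_disabled so they cannot shadow the CNI minikube is about to install; here nothing matched, hence "nothing to disable".)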
	I1027 23:29:15.293791 1382384 start.go:496] detecting cgroup driver to use...
	I1027 23:29:15.293821 1382384 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 23:29:15.293867 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 23:29:15.309499 1382384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 23:29:15.323058 1382384 docker.go:218] disabling cri-docker service (if available) ...
	I1027 23:29:15.323175 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 23:29:15.339572 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 23:29:15.354227 1382384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 23:29:15.468373 1382384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 23:29:15.591069 1382384 docker.go:234] disabling docker service ...
	I1027 23:29:15.591189 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 23:29:15.606878 1382384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 23:29:15.620798 1382384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 23:29:15.748929 1382384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 23:29:15.872886 1382384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 23:29:15.890660 1382384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 23:29:15.906654 1382384 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 23:29:15.906761 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.916506 1382384 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 23:29:15.916600 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.926592 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.936286 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.945124 1382384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 23:29:15.953537 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.962746 1382384 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 23:29:15.971004 1382384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
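	(Taken together, the sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on roughly the following fragment; the exact TOML section layout of that drop-in is not shown in the log, so treat this as an illustrative assumption:
	
	    # assumed shape of 02-crio.conf after the edits above
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	
	i.e. CRI-O is pinned to the preloaded pause image, switched to the cgroupfs driver detected on the host, and allowed to bind privileged ports from unprivileged pods.)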
	I1027 23:29:15.979956 1382384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 23:29:15.987602 1382384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 23:29:16.001973 1382384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:29:16.135477 1382384 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 23:29:16.286541 1382384 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 23:29:16.286667 1382384 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 23:29:16.291239 1382384 start.go:564] Will wait 60s for crictl version
	I1027 23:29:16.291360 1382384 ssh_runner.go:195] Run: which crictl
	I1027 23:29:16.294882 1382384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 23:29:16.321680 1382384 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 23:29:16.321849 1382384 ssh_runner.go:195] Run: crio --version
	I1027 23:29:16.360828 1382384 ssh_runner.go:195] Run: crio --version
	I1027 23:29:16.393456 1382384 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 23:29:16.396391 1382384 cli_runner.go:164] Run: docker network inspect newest-cni-852936 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 23:29:16.413033 1382384 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 23:29:16.416904 1382384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
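	(The one-liner above is minikube's idempotent /etc/hosts update: grep -v first drops any stale host.minikube.internal entry, echo appends the fresh mapping, and the result is staged in /tmp/h.$$ and copied into place with sudo, because a plain "sudo cmd > /etc/hosts" would open the redirect as the unprivileged user. The line it leaves behind:
	
	    192.168.85.1	host.minikube.internal
	
	The same pattern recurs below for control-plane.minikube.internal.)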
	I1027 23:29:16.429883 1382384 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1027 23:29:13.697418 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	W1027 23:29:16.200317 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	I1027 23:29:16.432630 1382384 kubeadm.go:884] updating cluster {Name:newest-cni-852936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 23:29:16.432775 1382384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 23:29:16.432862 1382384 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:29:16.470089 1382384 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:29:16.470114 1382384 crio.go:433] Images already preloaded, skipping extraction
	I1027 23:29:16.470176 1382384 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 23:29:16.502365 1382384 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 23:29:16.502412 1382384 cache_images.go:86] Images are preloaded, skipping loading
	I1027 23:29:16.502461 1382384 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 23:29:16.502589 1382384 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-852936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 23:29:16.502687 1382384 ssh_runner.go:195] Run: crio config
	I1027 23:29:16.576598 1382384 cni.go:84] Creating CNI manager for ""
	I1027 23:29:16.576620 1382384 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 23:29:16.576659 1382384 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1027 23:29:16.576689 1382384 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-852936 NodeName:newest-cni-852936 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 23:29:16.576834 1382384 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-852936"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
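	(The rendered file above is a single multi-document YAML: an InitConfiguration and a ClusterConfiguration for kubeadm itself, then a KubeletConfiguration and a KubeProxyConfiguration, separated by "---". On recent kubeadm releases such a file can be sanity-checked offline with, assuming the staged path shown later in this log:
	
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	
	although this run instead diffs kubeadm.yaml.new against the previous kubeadm.yaml to decide whether the restarted control plane needs reconfiguration.)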
	
	I1027 23:29:16.576908 1382384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 23:29:16.584945 1382384 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 23:29:16.585026 1382384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 23:29:16.592502 1382384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 23:29:16.605849 1382384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 23:29:16.620041 1382384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1027 23:29:16.633545 1382384 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 23:29:16.637404 1382384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 23:29:16.648272 1382384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:29:16.775190 1382384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:29:16.792568 1382384 certs.go:69] Setting up /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936 for IP: 192.168.85.2
	I1027 23:29:16.792586 1382384 certs.go:195] generating shared ca certs ...
	I1027 23:29:16.792601 1382384 certs.go:227] acquiring lock for ca certs: {Name:mk68d2d80ea72a7d936ed7b9721a4e350309fb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:29:16.792765 1382384 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key
	I1027 23:29:16.792821 1382384 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key
	I1027 23:29:16.792833 1382384 certs.go:257] generating profile certs ...
	I1027 23:29:16.792916 1382384 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/client.key
	I1027 23:29:16.792993 1382384 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.key.7d12570b
	I1027 23:29:16.793036 1382384 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.key
	I1027 23:29:16.793150 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem (1338 bytes)
	W1027 23:29:16.793181 1382384 certs.go:480] ignoring /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735_empty.pem, impossibly tiny 0 bytes
	I1027 23:29:16.793202 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 23:29:16.793228 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/ca.pem (1082 bytes)
	I1027 23:29:16.793255 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/cert.pem (1123 bytes)
	I1027 23:29:16.793281 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/key.pem (1675 bytes)
	I1027 23:29:16.793330 1382384 certs.go:484] found cert: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem (1708 bytes)
	I1027 23:29:16.793917 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 23:29:16.812607 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 23:29:16.829964 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 23:29:16.856222 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 23:29:16.873487 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 23:29:16.894161 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 23:29:16.922923 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 23:29:16.959397 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/newest-cni-852936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 23:29:17.006472 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 23:29:17.049337 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/certs/1134735.pem --> /usr/share/ca-certificates/1134735.pem (1338 bytes)
	I1027 23:29:17.081201 1382384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/ssl/certs/11347352.pem --> /usr/share/ca-certificates/11347352.pem (1708 bytes)
	I1027 23:29:17.106034 1382384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 23:29:17.121728 1382384 ssh_runner.go:195] Run: openssl version
	I1027 23:29:17.129224 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1134735.pem && ln -fs /usr/share/ca-certificates/1134735.pem /etc/ssl/certs/1134735.pem"
	I1027 23:29:17.145507 1382384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1134735.pem
	I1027 23:29:17.149674 1382384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 22:23 /usr/share/ca-certificates/1134735.pem
	I1027 23:29:17.149765 1382384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1134735.pem
	I1027 23:29:17.196710 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1134735.pem /etc/ssl/certs/51391683.0"
	I1027 23:29:17.206114 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11347352.pem && ln -fs /usr/share/ca-certificates/11347352.pem /etc/ssl/certs/11347352.pem"
	I1027 23:29:17.214593 1382384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11347352.pem
	I1027 23:29:17.218366 1382384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 22:23 /usr/share/ca-certificates/11347352.pem
	I1027 23:29:17.218534 1382384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11347352.pem
	I1027 23:29:17.260208 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11347352.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 23:29:17.268391 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 23:29:17.276997 1382384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:29:17.281271 1382384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 22:17 /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:29:17.281338 1382384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 23:29:17.323641 1382384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
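	(The ln targets above follow OpenSSL's hashed-directory lookup convention: tools resolve a CA under /etc/ssl/certs by its subject-name hash plus a ".0" suffix, which is why minikubeCA.pem is linked as b5213941.0. The hash printed by each earlier "openssl x509 -hash -noout" call is exactly that link name, e.g.:
	
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0")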
	I1027 23:29:17.331756 1382384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 23:29:17.335672 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 23:29:17.382471 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 23:29:17.424359 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 23:29:17.467561 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 23:29:17.513139 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 23:29:17.567837 1382384 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
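	(Each "-checkend 86400" probe above exits non-zero if the certificate will expire within the next 86400 seconds, i.e. 24 hours, which is presumably how minikube decides the existing control-plane certs can be reused on restart rather than regenerated. Standalone:
	
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "valid for >24h" || echo "renewal needed")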
	I1027 23:29:17.618470 1382384 kubeadm.go:401] StartCluster: {Name:newest-cni-852936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-852936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 23:29:17.618617 1382384 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 23:29:17.618713 1382384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 23:29:17.693163 1382384 cri.go:89] found id: ""
	I1027 23:29:17.693280 1382384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 23:29:17.707954 1382384 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1027 23:29:17.708031 1382384 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1027 23:29:17.708118 1382384 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 23:29:17.719144 1382384 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 23:29:17.719791 1382384 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-852936" does not appear in /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:29:17.720118 1382384 kubeconfig.go:62] /home/jenkins/minikube-integration/21790-1132878/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-852936" cluster setting kubeconfig missing "newest-cni-852936" context setting]
	I1027 23:29:17.720642 1382384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:29:17.722636 1382384 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 23:29:17.745586 1382384 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 23:29:17.745669 1382384 kubeadm.go:602] duration metric: took 37.617775ms to restartPrimaryControlPlane
	I1027 23:29:17.745694 1382384 kubeadm.go:403] duration metric: took 127.234259ms to StartCluster
	I1027 23:29:17.745742 1382384 settings.go:142] acquiring lock: {Name:mk86c9715754698328ecfa501614c702ab8751a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:29:17.745841 1382384 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:29:17.746909 1382384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21790-1132878/kubeconfig: {Name:mkf132c82ff85bc4604f03eb3e38c3e47d575b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 23:29:17.747200 1382384 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 23:29:17.747688 1382384 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 23:29:17.747770 1382384 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-852936"
	I1027 23:29:17.747783 1382384 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-852936"
	W1027 23:29:17.747789 1382384 addons.go:247] addon storage-provisioner should already be in state true
	I1027 23:29:17.747811 1382384 host.go:66] Checking if "newest-cni-852936" exists ...
	I1027 23:29:17.748343 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:17.748641 1382384 config.go:182] Loaded profile config "newest-cni-852936": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:29:17.748732 1382384 addons.go:69] Setting dashboard=true in profile "newest-cni-852936"
	I1027 23:29:17.748772 1382384 addons.go:238] Setting addon dashboard=true in "newest-cni-852936"
	W1027 23:29:17.748798 1382384 addons.go:247] addon dashboard should already be in state true
	I1027 23:29:17.748847 1382384 host.go:66] Checking if "newest-cni-852936" exists ...
	I1027 23:29:17.749340 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:17.749806 1382384 addons.go:69] Setting default-storageclass=true in profile "newest-cni-852936"
	I1027 23:29:17.749822 1382384 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-852936"
	I1027 23:29:17.750092 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:17.759323 1382384 out.go:179] * Verifying Kubernetes components...
	I1027 23:29:17.772375 1382384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 23:29:17.800819 1382384 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 23:29:17.801942 1382384 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 23:29:17.806725 1382384 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:29:17.806761 1382384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 23:29:17.806795 1382384 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 23:29:17.806836 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:17.811489 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 23:29:17.811514 1382384 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 23:29:17.811591 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:17.822485 1382384 addons.go:238] Setting addon default-storageclass=true in "newest-cni-852936"
	W1027 23:29:17.822507 1382384 addons.go:247] addon default-storageclass should already be in state true
	I1027 23:29:17.822532 1382384 host.go:66] Checking if "newest-cni-852936" exists ...
	I1027 23:29:17.822969 1382384 cli_runner.go:164] Run: docker container inspect newest-cni-852936 --format={{.State.Status}}
	I1027 23:29:17.865449 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:17.877907 1382384 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 23:29:17.877928 1382384 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 23:29:17.877992 1382384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-852936
	I1027 23:29:17.879738 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:17.900212 1382384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34604 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/newest-cni-852936/id_rsa Username:docker}
	I1027 23:29:18.077474 1382384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 23:29:18.149293 1382384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 23:29:18.160724 1382384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 23:29:18.236930 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 23:29:18.237002 1382384 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 23:29:18.328299 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 23:29:18.328364 1382384 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 23:29:18.383950 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 23:29:18.384014 1382384 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 23:29:18.408588 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 23:29:18.408653 1382384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 23:29:18.442883 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 23:29:18.442954 1382384 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 23:29:18.464941 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 23:29:18.465009 1382384 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 23:29:18.491431 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 23:29:18.491509 1382384 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 23:29:18.511476 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 23:29:18.511545 1382384 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 23:29:18.536825 1382384 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 23:29:18.536903 1382384 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 23:29:18.559539 1382384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
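	Once the apply above returns, the two dashboard workloads it creates can be waited on directly; a minimal sketch (deployment names inferred from the pods that appear later in this report):
	
	  kubectl -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard --timeout=120s
	  kubectl -n kubernetes-dashboard rollout status deployment/dashboard-metrics-scraper --timeout=120s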
	W1027 23:29:18.696525 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	W1027 23:29:20.699896 1377654 pod_ready.go:104] pod "coredns-66bc5c9577-lzssb" is not "Ready", error: <nil>
	I1027 23:29:21.700112 1377654 pod_ready.go:94] pod "coredns-66bc5c9577-lzssb" is "Ready"
	I1027 23:29:21.700136 1377654 pod_ready.go:86] duration metric: took 36.009275195s for pod "coredns-66bc5c9577-lzssb" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.703421 1377654 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.715777 1377654 pod_ready.go:94] pod "etcd-default-k8s-diff-port-336451" is "Ready"
	I1027 23:29:21.715842 1377654 pod_ready.go:86] duration metric: took 12.348506ms for pod "etcd-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.719027 1377654 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.728322 1377654 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-336451" is "Ready"
	I1027 23:29:21.728398 1377654 pod_ready.go:86] duration metric: took 9.29462ms for pod "kube-apiserver-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.732228 1377654 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:21.895924 1377654 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-336451" is "Ready"
	I1027 23:29:21.896004 1377654 pod_ready.go:86] duration metric: took 163.695676ms for pod "kube-controller-manager-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:22.098328 1377654 pod_ready.go:83] waiting for pod "kube-proxy-n4vzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:22.494663 1377654 pod_ready.go:94] pod "kube-proxy-n4vzn" is "Ready"
	I1027 23:29:22.494740 1377654 pod_ready.go:86] duration metric: took 396.322861ms for pod "kube-proxy-n4vzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:22.694755 1377654 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:23.095902 1377654 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-336451" is "Ready"
	I1027 23:29:23.095941 1377654 pod_ready.go:86] duration metric: took 401.110104ms for pod "kube-scheduler-default-k8s-diff-port-336451" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 23:29:23.095954 1377654 pod_ready.go:40] duration metric: took 37.409990985s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 23:29:23.191426 1377654 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:29:23.194537 1377654 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-336451" cluster and "default" namespace by default
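	The per-pod "Ready or be gone" waits above map onto kubectl's wait primitive; a minimal sketch over the same label set (the 120s timeout is an arbitrary choice here):
	
	  for l in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	    kubectl -n kube-system wait --for=condition=Ready pod -l "$l" --timeout=120s
	  done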
	I1027 23:29:23.865678 1382384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.78812787s)
	I1027 23:29:23.865733 1382384 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.716420658s)
	I1027 23:29:23.865764 1382384 api_server.go:52] waiting for apiserver process to appear ...
	I1027 23:29:23.865819 1382384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 23:29:23.865890 1382384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.705145908s)
	I1027 23:29:23.866282 1382384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.30666267s)
	I1027 23:29:23.869166 1382384 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-852936 addons enable metrics-server
	
	I1027 23:29:23.896432 1382384 api_server.go:72] duration metric: took 6.149164962s to wait for apiserver process to appear ...
	I1027 23:29:23.896452 1382384 api_server.go:88] waiting for apiserver healthz status ...
	I1027 23:29:23.896472 1382384 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 23:29:23.905254 1382384 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 23:29:23.905324 1382384 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
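	The 500 above is expected this early in a restart: only rbac/bootstrap-roles is still failing. The same verbose probe can be reproduced by hand (-k skips verification of the cluster-internal serving certificate; anonymous access to /healthz is allowed by the default system:public-info-viewer binding):
	
	  curl -k "https://192.168.85.2:8443/healthz?verbose"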
	I1027 23:29:23.915351 1382384 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 23:29:23.918229 1382384 addons.go:514] duration metric: took 6.170528043s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 23:29:24.396619 1382384 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 23:29:24.404992 1382384 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 23:29:24.406155 1382384 api_server.go:141] control plane version: v1.34.1
	I1027 23:29:24.406180 1382384 api_server.go:131] duration metric: took 509.720774ms to wait for apiserver health ...
	I1027 23:29:24.406189 1382384 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 23:29:24.409864 1382384 system_pods.go:59] 8 kube-system pods found
	I1027 23:29:24.409906 1382384 system_pods.go:61] "coredns-66bc5c9577-jzn5z" [191e4eff-7490-4e8a-9231-7e634396b226] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 23:29:24.409916 1382384 system_pods.go:61] "etcd-newest-cni-852936" [4d42a25f-5e7b-4657-a6f1-d46bc06216dc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 23:29:24.409949 1382384 system_pods.go:61] "kindnet-q6tfx" [b3f08f81-257b-4bba-9acc-4b3c88d70bb7] Running
	I1027 23:29:24.409959 1382384 system_pods.go:61] "kube-apiserver-newest-cni-852936" [090b241c-c08c-4306-b40c-871e5421048b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 23:29:24.409967 1382384 system_pods.go:61] "kube-controller-manager-newest-cni-852936" [5016a35c-4906-416f-981d-3d8eafafac9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 23:29:24.409976 1382384 system_pods.go:61] "kube-proxy-qcz7m" [8263ca0a-34e2-4388-82ba-1714b8940cba] Running
	I1027 23:29:24.409988 1382384 system_pods.go:61] "kube-scheduler-newest-cni-852936" [4f47dc44-57da-47eb-b115-12f3d5bac007] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 23:29:24.409994 1382384 system_pods.go:61] "storage-provisioner" [ebb4e6b7-17b5-43ab-b54c-34a6b5b2caa2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 23:29:24.410017 1382384 system_pods.go:74] duration metric: took 3.807388ms to wait for pod list to return data ...
	I1027 23:29:24.410063 1382384 default_sa.go:34] waiting for default service account to be created ...
	I1027 23:29:24.412702 1382384 default_sa.go:45] found service account: "default"
	I1027 23:29:24.412729 1382384 default_sa.go:55] duration metric: took 2.657145ms for default service account to be created ...
	I1027 23:29:24.412743 1382384 kubeadm.go:587] duration metric: took 6.665481562s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 23:29:24.412760 1382384 node_conditions.go:102] verifying NodePressure condition ...
	I1027 23:29:24.415832 1382384 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 23:29:24.415864 1382384 node_conditions.go:123] node cpu capacity is 2
	I1027 23:29:24.415877 1382384 node_conditions.go:105] duration metric: took 3.112233ms to run NodePressure ...
	I1027 23:29:24.415891 1382384 start.go:242] waiting for startup goroutines ...
	I1027 23:29:24.415931 1382384 start.go:247] waiting for cluster config update ...
	I1027 23:29:24.415944 1382384 start.go:256] writing updated cluster config ...
	I1027 23:29:24.416251 1382384 ssh_runner.go:195] Run: rm -f paused
	I1027 23:29:24.473504 1382384 start.go:626] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 23:29:24.476808 1382384 out.go:179] * Done! kubectl is now configured to use "newest-cni-852936" cluster and "default" namespace by default
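	With the profile up, the three addons reported above can be spot-checked from the host; a sketch using the context name minikube just wrote:
	
	  kubectl --context newest-cni-852936 -n kubernetes-dashboard get pods
	  kubectl --context newest-cni-852936 -n kube-system get pod storage-provisioner
	  kubectl --context newest-cni-852936 get storageclass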
	
	
	==> CRI-O <==
	Oct 27 23:29:12 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:12.123367234Z" level=info msg="Removed container 177bb2576d6d6e598b497b6a66958a8cf28e9e66365b4f64584f9a08c07fe9f2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m6dms/dashboard-metrics-scraper" id=eeb18849-9cd2-4ca8-beb2-e8f83da78f9c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 23:29:15 default-k8s-diff-port-336451 conmon[1146]: conmon d77a4209b5d8b6166e65 <ninfo>: container 1153 exited with status 1
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.122668022Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d22a7c97-575e-4758-9307-aad152b9e71a name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.124231555Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3ea6b815-4017-43d4-9df2-4f5faad09b00 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.125242572Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b8fc6fef-9950-4572-a1a9-c4ca627aa3cf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.125376014Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.137808065Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.138000716Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/94737289a9e6465aef3b7b36bb5b0875e4a56380e6a03c8f9b60b7717aeaa326/merged/etc/passwd: no such file or directory"
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.138036819Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/94737289a9e6465aef3b7b36bb5b0875e4a56380e6a03c8f9b60b7717aeaa326/merged/etc/group: no such file or directory"
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.139167838Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.175864812Z" level=info msg="Created container c63a21c878d688b09782a9d01e91abf9249e4e4f9b61c603169d9ee05fb2d2ee: kube-system/storage-provisioner/storage-provisioner" id=b8fc6fef-9950-4572-a1a9-c4ca627aa3cf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.178710407Z" level=info msg="Starting container: c63a21c878d688b09782a9d01e91abf9249e4e4f9b61c603169d9ee05fb2d2ee" id=9b385a48-b9d9-4bb0-8151-64d5b8aef4c9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 23:29:16 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:16.18357669Z" level=info msg="Started container" PID=1642 containerID=c63a21c878d688b09782a9d01e91abf9249e4e4f9b61c603169d9ee05fb2d2ee description=kube-system/storage-provisioner/storage-provisioner id=9b385a48-b9d9-4bb0-8151-64d5b8aef4c9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1620ca91315a8c9bdbb1959a770b2c16bdc4578621da56c94324efe2073e52ef
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.462928518Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.46687986Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.467040543Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.46712733Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.470973808Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.471167977Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.471242653Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.474432228Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.47457632Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.474660276Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.477858303Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 23:29:25 default-k8s-diff-port-336451 crio[650]: time="2025-10-27T23:29:25.477997423Z" level=info msg="Updated default CNI network name to kindnet"
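	The create/start events above can be cross-checked against the runtime with crictl from inside the node; a sketch (the abbreviated container ID is the one shown in the status table below):
	
	  sudo crictl ps -a --name storage-provisioner
	  sudo crictl logs c63a21c878d68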
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c63a21c878d68       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   1620ca91315a8       storage-provisioner                                    kube-system
	eaf10ad419dd1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago       Exited              dashboard-metrics-scraper   2                   8c58ed4d8f432       dashboard-metrics-scraper-6ffb444bf9-m6dms             kubernetes-dashboard
	d9cc060395e7c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago       Running             kubernetes-dashboard        0                   155d0f05336af       kubernetes-dashboard-855c9754f9-9qnl7                  kubernetes-dashboard
	7fcdf13057a7d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   ece44cd15bf43       busybox                                                default
	fd096bbd312ce       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   9339e34ef2bfe       coredns-66bc5c9577-lzssb                               kube-system
	e286d3355f877       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   68b515cab5e27       kindnet-ht7mm                                          kube-system
	d77a4209b5d8b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   1620ca91315a8       storage-provisioner                                    kube-system
	31fd7339c9b68       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           56 seconds ago       Running             kube-proxy                  1                   b126d22e46b90       kube-proxy-n4vzn                                       kube-system
	7f66ec5899883       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   75ca3ecbc6482       etcd-default-k8s-diff-port-336451                      kube-system
	e042d7ccfe395       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   bb6bdcdeb02b4       kube-scheduler-default-k8s-diff-port-336451            kube-system
	69c1f90555bd0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   11d9d45a7f20e       kube-controller-manager-default-k8s-diff-port-336451   kube-system
	ee6b21c638763       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   2e8c85ce6acb7       kube-apiserver-default-k8s-diff-port-336451            kube-system
	
	
	==> coredns [fd096bbd312ce4ab42d6ec3b91f6f324ae5551679e881b224b3a5f4205916eee] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60260 - 9416 "HINFO IN 7204772191305620454.5806717990721506346. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014237049s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
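	The dial timeouts to 10.96.0.1:443 mean the kubernetes Service VIP was unreachable while kube-proxy was restarting. A throwaway pod can confirm the VIP is reachable again; the image choice here is an assumption:
	
	  kubectl run viptest --rm -it --restart=Never --image=curlimages/curl -- \
	    curl -sk -m 5 https://10.96.0.1:443/healthz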
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-336451
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-336451
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7da329725eb7dc274e4db0e5490c73fe54de60f
	                    minikube.k8s.io/name=default-k8s-diff-port-336451
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T23_27_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 23:27:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-336451
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 23:29:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 23:29:13 +0000   Mon, 27 Oct 2025 23:27:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 23:29:13 +0000   Mon, 27 Oct 2025 23:27:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 23:29:13 +0000   Mon, 27 Oct 2025 23:27:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 23:29:13 +0000   Mon, 27 Oct 2025 23:27:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-336451
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                b39d5467-41ea-430a-8620-2c79f46d3819
	  Boot ID:                    92ae6010-3357-40d5-99a5-768ec597200c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-66bc5c9577-lzssb                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m25s
	  kube-system                 etcd-default-k8s-diff-port-336451                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m33s
	  kube-system                 kindnet-ht7mm                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-336451             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-336451    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-n4vzn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-336451             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-m6dms              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9qnl7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m23s                  kube-proxy       
	  Normal   Starting                 54s                    kube-proxy       
	  Warning  CgroupV1                 2m40s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m40s (x8 over 2m40s)  kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m40s (x8 over 2m40s)  kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m40s (x8 over 2m40s)  kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m31s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m30s                  kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m30s                  kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s                  kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m26s                  node-controller  Node default-k8s-diff-port-336451 event: Registered Node default-k8s-diff-port-336451 in Controller
	  Normal   NodeReady                103s                   kubelet          Node default-k8s-diff-port-336451 status is now: NodeReady
	  Normal   Starting                 66s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  66s (x8 over 66s)      kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 66s)      kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x8 over 66s)      kubelet          Node default-k8s-diff-port-336451 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node default-k8s-diff-port-336451 event: Registered Node default-k8s-diff-port-336451 in Controller
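	The report above is plain describe output; for scripting, the same node conditions can be pulled as structured fields, e.g.:
	
	  kubectl get node default-k8s-diff-port-336451 \
	    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'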
	
	
	==> dmesg <==
	[Oct27 23:06] overlayfs: idmapped layers are currently not supported
	[  +3.129054] overlayfs: idmapped layers are currently not supported
	[Oct27 23:08] overlayfs: idmapped layers are currently not supported
	[Oct27 23:09] overlayfs: idmapped layers are currently not supported
	[  +0.696324] overlayfs: idmapped layers are currently not supported
	[ +42.065460] overlayfs: idmapped layers are currently not supported
	[Oct27 23:10] overlayfs: idmapped layers are currently not supported
	[ +23.722860] overlayfs: idmapped layers are currently not supported
	[Oct27 23:16] overlayfs: idmapped layers are currently not supported
	[Oct27 23:17] overlayfs: idmapped layers are currently not supported
	[Oct27 23:18] overlayfs: idmapped layers are currently not supported
	[Oct27 23:19] overlayfs: idmapped layers are currently not supported
	[Oct27 23:20] overlayfs: idmapped layers are currently not supported
	[Oct27 23:21] overlayfs: idmapped layers are currently not supported
	[Oct27 23:22] overlayfs: idmapped layers are currently not supported
	[ +34.590925] overlayfs: idmapped layers are currently not supported
	[Oct27 23:23] overlayfs: idmapped layers are currently not supported
	[  +6.906011] overlayfs: idmapped layers are currently not supported
	[Oct27 23:25] overlayfs: idmapped layers are currently not supported
	[  +2.284017] overlayfs: idmapped layers are currently not supported
	[Oct27 23:27] overlayfs: idmapped layers are currently not supported
	[  +6.661421] overlayfs: idmapped layers are currently not supported
	[Oct27 23:28] overlayfs: idmapped layers are currently not supported
	[ +11.644898] overlayfs: idmapped layers are currently not supported
	[Oct27 23:29] overlayfs: idmapped layers are currently not supported
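	The overlayfs message recurs roughly once per container start on this 5.15 kernel and did not trip any assertion in these tests; it can be filtered with human-readable timestamps via:
	
	  sudo dmesg --ctime | grep -i 'overlayfs: idmapped'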
	
	
	==> etcd [7f66ec5899883992c1749593bfd4630c3ce8244c7e186676fa13e99cb58e4a03] <==
	{"level":"warn","ts":"2025-10-27T23:28:39.851445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:39.887103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:39.915321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:39.951291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:39.988698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.006987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.030608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.053250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.095550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.150163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.157963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.208519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.251402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.311310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.339714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.368247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.397258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.443140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.468672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.490612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.523075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.549836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.608593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.610503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T23:28:40.728843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44358","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:29:40 up  6:12,  0 user,  load average: 4.70, 4.37, 3.60
	Linux default-k8s-diff-port-336451 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e286d3355f877874f1258955d812cbe73eef79f899dbe2144abe0c20b709727a] <==
	I1027 23:28:45.148456       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 23:28:45.148730       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 23:28:45.148874       1 main.go:148] setting mtu 1500 for CNI 
	I1027 23:28:45.148894       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 23:28:45.148910       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T23:28:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 23:28:45.486697       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 23:28:45.486727       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 23:28:45.486737       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 23:28:45.487253       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 23:29:15.460712       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 23:29:15.487412       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 23:29:15.487594       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 23:29:15.487763       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1027 23:29:16.986977       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 23:29:16.987114       1 metrics.go:72] Registering metrics
	I1027 23:29:16.987211       1 controller.go:711] "Syncing nftables rules"
	I1027 23:29:25.462527       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 23:29:25.462644       1 main.go:301] handling current node
	I1027 23:29:35.469082       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 23:29:35.469120       1 main.go:301] handling current node
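	kindnet writes the CNI config referenced above to /etc/cni/net.d on the node; with the docker driver the node is a container named after the profile, so it can be read with:
	
	  docker exec default-k8s-diff-port-336451 cat /etc/cni/net.d/10-kindnet.conflist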
	
	
	==> kube-apiserver [ee6b21c638763f9bea06ed3eb613912563fe107d49320d174cfb911c51258b74] <==
	I1027 23:28:42.569055       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 23:28:42.569190       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 23:28:42.570070       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 23:28:42.827049       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 23:28:42.857016       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 23:28:42.857050       1 policy_source.go:240] refreshing policies
	I1027 23:28:42.857242       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1027 23:28:42.857299       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 23:28:42.878550       1 aggregator.go:171] initial CRD sync complete...
	I1027 23:28:42.878578       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 23:28:42.878585       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 23:28:42.878593       1 cache.go:39] Caches are synced for autoregister controller
	I1027 23:28:42.893606       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 23:28:42.982528       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1027 23:28:43.031819       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 23:28:43.835892       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 23:28:44.214513       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 23:28:44.582248       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 23:28:44.860958       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 23:28:44.880013       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 23:28:45.451144       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.54.133"}
	I1027 23:28:45.520515       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.84.158"}
	I1027 23:28:47.489891       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 23:28:47.828089       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 23:28:47.927808       1 controller.go:667] quota admission added evaluator for: endpoints
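	The ClusterIPs allocated above for the two dashboard Services can be checked against the live objects:
	
	  kubectl -n kubernetes-dashboard get svc -o wide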
	
	
	==> kube-controller-manager [69c1f90555bd0a08896702d72889b7cbea6dc8f6bf3d24bcc9936a63461f070f] <==
	I1027 23:28:47.443225       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 23:28:47.443277       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 23:28:47.450438       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 23:28:47.452806       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 23:28:47.457165       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 23:28:47.464805       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 23:28:47.465964       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 23:28:47.466037       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 23:28:47.467275       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:28:47.467294       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 23:28:47.467303       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 23:28:47.468649       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 23:28:47.468692       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 23:28:47.471135       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 23:28:47.471278       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 23:28:47.471687       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 23:28:47.475524       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 23:28:47.475615       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 23:28:47.477987       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 23:28:47.490149       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 23:28:47.490202       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 23:28:47.499364       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 23:28:47.511622       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 23:28:47.515853       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 23:28:47.526069       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [31fd7339c9b6866e0f75aa299a3f5f421e9b2e21a2e13ea31cc69466a502ee2c] <==
	I1027 23:28:45.666272       1 server_linux.go:53] "Using iptables proxy"
	I1027 23:28:45.763997       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 23:28:45.877872       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 23:28:45.878114       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 23:28:45.878286       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 23:28:45.961858       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 23:28:45.961973       1 server_linux.go:132] "Using iptables Proxier"
	I1027 23:28:45.966103       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 23:28:45.966613       1 server.go:527] "Version info" version="v1.34.1"
	I1027 23:28:45.966812       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:28:45.968200       1 config.go:200] "Starting service config controller"
	I1027 23:28:45.968390       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 23:28:45.968444       1 config.go:106] "Starting endpoint slice config controller"
	I1027 23:28:45.968488       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 23:28:45.968525       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 23:28:45.968553       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 23:28:45.969181       1 config.go:309] "Starting node config controller"
	I1027 23:28:45.971867       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 23:28:45.971957       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 23:28:46.071116       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 23:28:46.072216       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 23:28:46.072340       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e042d7ccfe395ac64bbfa1b1099e7ff453e4d67df7754503aac635f0f8ba71a8] <==
	I1027 23:28:40.605174       1 serving.go:386] Generated self-signed cert in-memory
	I1027 23:28:45.010333       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 23:28:45.010375       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 23:28:45.073248       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 23:28:45.073311       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 23:28:45.073378       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:28:45.073390       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 23:28:45.073404       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:28:45.073411       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:28:45.073768       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 23:28:45.073872       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 23:28:45.187570       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 23:28:45.187644       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 23:28:45.187740       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 23:28:48 default-k8s-diff-port-336451 kubelet[777]: I1027 23:28:48.298344     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b7431c94-0d43-4b74-900a-1d361016710a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-9qnl7\" (UID: \"b7431c94-0d43-4b74-900a-1d361016710a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9qnl7"
	Oct 27 23:28:48 default-k8s-diff-port-336451 kubelet[777]: I1027 23:28:48.298415     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhzrl\" (UniqueName: \"kubernetes.io/projected/b7431c94-0d43-4b74-900a-1d361016710a-kube-api-access-mhzrl\") pod \"kubernetes-dashboard-855c9754f9-9qnl7\" (UID: \"b7431c94-0d43-4b74-900a-1d361016710a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9qnl7"
	Oct 27 23:28:48 default-k8s-diff-port-336451 kubelet[777]: W1027 23:28:48.481244     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8835f98b0ace2260229b60a7faffa2e89b8adae73752ad3fe2d4d4baea93bf59/crio-155d0f05336af5592b0a628082022e28e43783921de7e5d820531515052e42d1 WatchSource:0}: Error finding container 155d0f05336af5592b0a628082022e28e43783921de7e5d820531515052e42d1: Status 404 returned error can't find the container with id 155d0f05336af5592b0a628082022e28e43783921de7e5d820531515052e42d1
	Oct 27 23:28:51 default-k8s-diff-port-336451 kubelet[777]: I1027 23:28:51.585008     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 23:28:56 default-k8s-diff-port-336451 kubelet[777]: I1027 23:28:56.053440     777 scope.go:117] "RemoveContainer" containerID="26d0ab726831431d5e33718f92fc0965c0102605fffeadf80af47fd90644d24d"
	Oct 27 23:28:57 default-k8s-diff-port-336451 kubelet[777]: I1027 23:28:57.062452     777 scope.go:117] "RemoveContainer" containerID="26d0ab726831431d5e33718f92fc0965c0102605fffeadf80af47fd90644d24d"
	Oct 27 23:28:57 default-k8s-diff-port-336451 kubelet[777]: I1027 23:28:57.062979     777 scope.go:117] "RemoveContainer" containerID="177bb2576d6d6e598b497b6a66958a8cf28e9e66365b4f64584f9a08c07fe9f2"
	Oct 27 23:28:57 default-k8s-diff-port-336451 kubelet[777]: E1027 23:28:57.063423     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m6dms_kubernetes-dashboard(f8f98b14-af0d-4d78-929d-b0d1f014939b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m6dms" podUID="f8f98b14-af0d-4d78-929d-b0d1f014939b"
	Oct 27 23:28:58 default-k8s-diff-port-336451 kubelet[777]: I1027 23:28:58.066663     777 scope.go:117] "RemoveContainer" containerID="177bb2576d6d6e598b497b6a66958a8cf28e9e66365b4f64584f9a08c07fe9f2"
	Oct 27 23:28:58 default-k8s-diff-port-336451 kubelet[777]: E1027 23:28:58.066812     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m6dms_kubernetes-dashboard(f8f98b14-af0d-4d78-929d-b0d1f014939b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m6dms" podUID="f8f98b14-af0d-4d78-929d-b0d1f014939b"
	Oct 27 23:28:59 default-k8s-diff-port-336451 kubelet[777]: I1027 23:28:59.068989     777 scope.go:117] "RemoveContainer" containerID="177bb2576d6d6e598b497b6a66958a8cf28e9e66365b4f64584f9a08c07fe9f2"
	Oct 27 23:28:59 default-k8s-diff-port-336451 kubelet[777]: E1027 23:28:59.069150     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m6dms_kubernetes-dashboard(f8f98b14-af0d-4d78-929d-b0d1f014939b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m6dms" podUID="f8f98b14-af0d-4d78-929d-b0d1f014939b"
	Oct 27 23:29:11 default-k8s-diff-port-336451 kubelet[777]: I1027 23:29:11.688557     777 scope.go:117] "RemoveContainer" containerID="177bb2576d6d6e598b497b6a66958a8cf28e9e66365b4f64584f9a08c07fe9f2"
	Oct 27 23:29:12 default-k8s-diff-port-336451 kubelet[777]: I1027 23:29:12.108762     777 scope.go:117] "RemoveContainer" containerID="177bb2576d6d6e598b497b6a66958a8cf28e9e66365b4f64584f9a08c07fe9f2"
	Oct 27 23:29:12 default-k8s-diff-port-336451 kubelet[777]: I1027 23:29:12.109093     777 scope.go:117] "RemoveContainer" containerID="eaf10ad419dd1638041c2c094f64e06cb64c2fac32344129da5e4dbe35087490"
	Oct 27 23:29:12 default-k8s-diff-port-336451 kubelet[777]: E1027 23:29:12.109355     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m6dms_kubernetes-dashboard(f8f98b14-af0d-4d78-929d-b0d1f014939b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m6dms" podUID="f8f98b14-af0d-4d78-929d-b0d1f014939b"
	Oct 27 23:29:12 default-k8s-diff-port-336451 kubelet[777]: I1027 23:29:12.131272     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9qnl7" podStartSLOduration=11.306867992 podStartE2EDuration="24.131255344s" podCreationTimestamp="2025-10-27 23:28:48 +0000 UTC" firstStartedPulling="2025-10-27 23:28:48.518838978 +0000 UTC m=+14.011908252" lastFinishedPulling="2025-10-27 23:29:01.34322633 +0000 UTC m=+26.836295604" observedRunningTime="2025-10-27 23:29:02.104389919 +0000 UTC m=+27.597459193" watchObservedRunningTime="2025-10-27 23:29:12.131255344 +0000 UTC m=+37.624324626"
	Oct 27 23:29:16 default-k8s-diff-port-336451 kubelet[777]: I1027 23:29:16.122116     777 scope.go:117] "RemoveContainer" containerID="d77a4209b5d8b6166e65f50776e9be005d032b980c041b2b25fb2f68396863f1"
	Oct 27 23:29:18 default-k8s-diff-port-336451 kubelet[777]: I1027 23:29:18.402828     777 scope.go:117] "RemoveContainer" containerID="eaf10ad419dd1638041c2c094f64e06cb64c2fac32344129da5e4dbe35087490"
	Oct 27 23:29:18 default-k8s-diff-port-336451 kubelet[777]: E1027 23:29:18.403002     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m6dms_kubernetes-dashboard(f8f98b14-af0d-4d78-929d-b0d1f014939b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m6dms" podUID="f8f98b14-af0d-4d78-929d-b0d1f014939b"
	Oct 27 23:29:29 default-k8s-diff-port-336451 kubelet[777]: I1027 23:29:29.689456     777 scope.go:117] "RemoveContainer" containerID="eaf10ad419dd1638041c2c094f64e06cb64c2fac32344129da5e4dbe35087490"
	Oct 27 23:29:29 default-k8s-diff-port-336451 kubelet[777]: E1027 23:29:29.690131     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-m6dms_kubernetes-dashboard(f8f98b14-af0d-4d78-929d-b0d1f014939b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-m6dms" podUID="f8f98b14-af0d-4d78-929d-b0d1f014939b"
	Oct 27 23:29:35 default-k8s-diff-port-336451 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 23:29:35 default-k8s-diff-port-336451 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 23:29:35 default-k8s-diff-port-336451 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [d9cc060395e7c461eef94cb5f9bb56799fcbc841f9f373397f342e2d95f6b958] <==
	2025/10/27 23:29:01 Using namespace: kubernetes-dashboard
	2025/10/27 23:29:01 Using in-cluster config to connect to apiserver
	2025/10/27 23:29:01 Using secret token for csrf signing
	2025/10/27 23:29:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 23:29:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 23:29:01 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 23:29:01 Generating JWE encryption key
	2025/10/27 23:29:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 23:29:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 23:29:02 Initializing JWE encryption key from synchronized object
	2025/10/27 23:29:02 Creating in-cluster Sidecar client
	2025/10/27 23:29:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 23:29:02 Serving insecurely on HTTP port: 9090
	2025/10/27 23:29:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 23:29:01 Starting overwatch
	
	
	==> storage-provisioner [c63a21c878d688b09782a9d01e91abf9249e4e4f9b61c603169d9ee05fb2d2ee] <==
	I1027 23:29:16.208369       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 23:29:16.233527       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 23:29:16.233749       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 23:29:16.237275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:19.698752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:23.959506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:27.558522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:30.612227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:33.635125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:33.646540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:29:33.646832       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 23:29:33.647127       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-336451_426f74d4-a3b4-4edf-aacf-2b514f271032!
	I1027 23:29:33.648604       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2176cbc4-0409-4665-84bd-c2de79a00ad7", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-336451_426f74d4-a3b4-4edf-aacf-2b514f271032 became leader
	W1027 23:29:33.661421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:33.665156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 23:29:33.747619       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-336451_426f74d4-a3b4-4edf-aacf-2b514f271032!
	W1027 23:29:35.667960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:35.673580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:37.677039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:37.682004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:39.685569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 23:29:39.692673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d77a4209b5d8b6166e65f50776e9be005d032b980c041b2b25fb2f68396863f1] <==
	I1027 23:28:45.149792       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 23:29:15.176827       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
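The two storage-provisioner excerpts above show a leader-election handoff: the old instance dies on an apiserver timeout, and the new one blocks for roughly 17s on the kube-system/k8s.io-minikube-hostpath lock before starting its controller. The repeated "v1 Endpoints is deprecated" warnings come from that lock living on an Endpoints object (the LeaderElection event above literally references Kind:"Endpoints"). A minimal stand-alone sketch of the same pattern with client-go, using the modern Lease lock instead; this is an illustration, not minikube's storage-provisioner code:

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		// Same namespace/name as the lock in the provisioner logs, but held
		// as a coordination.k8s.io Lease rather than an Endpoints object.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:            lock,
			ReleaseOnCancel: true,
			LeaseDuration:   15 * time.Second, // how long a held lease stays valid
			RenewDeadline:   10 * time.Second, // leader must renew within this window
			RetryPeriod:     2 * time.Second,  // candidates retry at this interval
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease; starting provisioner controller")
					<-ctx.Done() // real controller work would run here
				},
				OnStoppedLeading: func() { log.Println("lost lease; shutting down") },
			},
		})
	}

With a Lease lock the election goes through the coordination.k8s.io API, which is why newer controllers do not trip the Endpoints deprecation warning seen above.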
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-336451 -n default-k8s-diff-port-336451
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-336451 -n default-k8s-diff-port-336451: exit status 2 (384.30318ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
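`--format={{.APIServer}}` is a Go text/template rendered against minikube's status struct, which is how the command can print "Running" on stdout while still exiting 2 for the degraded profile. A self-contained sketch of that template mechanism; the one-field Status struct here is a trimmed assumption, not minikube's real type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a trimmed stand-in for minikube's status struct; the real
	// type carries more fields (Host, Kubelet, Kubeconfig, ...).
	type Status struct {
		APIServer string
	}

	func main() {
		// The same template string the test passes via --format.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		if err := tmpl.Execute(os.Stdout, Status{APIServer: "Running"}); err != nil {
			panic(err)
		}
		// Prints: Running
	}

Because the template only projects one field, the exit code is the test's only signal that the profile is actually unhealthy.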
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-336451 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.30s)
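For reference, the kubelet excerpt in the logs above shows dashboard-metrics-scraper's CrashLoopBackOff delay doubling from 10s to 20s across restarts. A rough model of that curve, assuming the usual 10s base and kubelet's 5-minute cap; this is an approximation, not kubelet's actual implementation:

	package main

	import (
		"fmt"
		"time"
	)

	// crashLoopDelay approximates the kubelet back-off seen in the log:
	// a 10s base that doubles per restart, capped at 5 minutes.
	func crashLoopDelay(restarts int) time.Duration {
		const (
			base     = 10 * time.Second
			maxDelay = 5 * time.Minute
		)
		d := base
		for i := 0; i < restarts; i++ {
			d *= 2
			if d > maxDelay {
				return maxDelay
			}
		}
		return d
	}

	func main() {
		for r := 0; r <= 6; r++ {
			fmt.Printf("restart %d -> back-off %s\n", r, crashLoopDelay(r))
		}
	}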


Test pass (260/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 8.92
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 9.8
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.1
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.7
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 166.53
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 10.74
48 TestAddons/StoppedEnableDisable 12.41
49 TestCertOptions 37.69
50 TestCertExpiration 254.58
52 TestForceSystemdFlag 48.07
53 TestForceSystemdEnv 41.71
58 TestErrorSpam/setup 33.53
59 TestErrorSpam/start 0.77
60 TestErrorSpam/status 1.04
61 TestErrorSpam/pause 5.71
62 TestErrorSpam/unpause 6.07
63 TestErrorSpam/stop 1.5
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 80.65
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 24.2
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.58
75 TestFunctional/serial/CacheCmd/cache/add_local 1.15
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.84
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 32.24
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.45
86 TestFunctional/serial/LogsFileCmd 1.5
87 TestFunctional/serial/InvalidService 4.1
89 TestFunctional/parallel/ConfigCmd 0.5
90 TestFunctional/parallel/DashboardCmd 9.99
91 TestFunctional/parallel/DryRun 0.62
92 TestFunctional/parallel/InternationalLanguage 0.24
93 TestFunctional/parallel/StatusCmd 1.31
98 TestFunctional/parallel/AddonsCmd 0.2
99 TestFunctional/parallel/PersistentVolumeClaim 26.07
101 TestFunctional/parallel/SSHCmd 0.73
102 TestFunctional/parallel/CpCmd 2.29
104 TestFunctional/parallel/FileSync 0.37
105 TestFunctional/parallel/CertSync 2.26
109 TestFunctional/parallel/NodeLabels 0.11
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
113 TestFunctional/parallel/License 0.35
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.41
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
127 TestFunctional/parallel/ProfileCmd/profile_list 0.42
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
129 TestFunctional/parallel/MountCmd/any-port 8.11
130 TestFunctional/parallel/MountCmd/specific-port 2.13
131 TestFunctional/parallel/MountCmd/VerifyCleanup 2.27
132 TestFunctional/parallel/ServiceCmd/List 0.59
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.66
137 TestFunctional/parallel/Version/short 0.06
138 TestFunctional/parallel/Version/components 0.98
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.25
144 TestFunctional/parallel/ImageCommands/Setup 0.68
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.68
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.01
162 TestMultiControlPlane/serial/StartCluster 192.29
163 TestMultiControlPlane/serial/DeployApp 6.55
164 TestMultiControlPlane/serial/PingHostFromPods 1.46
165 TestMultiControlPlane/serial/AddWorkerNode 60.97
166 TestMultiControlPlane/serial/NodeLabels 0.1
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.09
168 TestMultiControlPlane/serial/CopyFile 20.27
169 TestMultiControlPlane/serial/StopSecondaryNode 13.01
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.97
171 TestMultiControlPlane/serial/RestartSecondaryNode 23.43
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.16
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 136.5
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.83
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
176 TestMultiControlPlane/serial/StopCluster 36.01
177 TestMultiControlPlane/serial/RestartCluster 75.71
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
179 TestMultiControlPlane/serial/AddSecondaryNode 82.17
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.1
185 TestJSONOutput/start/Command 79.89
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.83
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 46.07
211 TestKicCustomNetwork/use_default_bridge_network 38.72
212 TestKicExistingNetwork 38.44
213 TestKicCustomSubnet 35.83
214 TestKicStaticIP 40.18
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 73.45
219 TestMountStart/serial/StartWithMountFirst 9.12
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 7.23
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.88
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 7.59
227 TestMountStart/serial/VerifyMountPostStop 0.29
230 TestMultiNode/serial/FreshStart2Nodes 136.81
231 TestMultiNode/serial/DeployApp2Nodes 6.23
232 TestMultiNode/serial/PingHostFrom2Pods 0.91
233 TestMultiNode/serial/AddNode 58.67
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.48
237 TestMultiNode/serial/StopNode 2.47
238 TestMultiNode/serial/StartAfterStop 8.14
239 TestMultiNode/serial/RestartKeepsNodes 79.61
240 TestMultiNode/serial/DeleteNode 6.02
241 TestMultiNode/serial/StopMultiNode 24.07
242 TestMultiNode/serial/RestartMultiNode 48.96
243 TestMultiNode/serial/ValidateNameConflict 39.9
248 TestPreload 133.92
250 TestScheduledStopUnix 109.46
253 TestInsufficientStorage 13.61
254 TestRunningBinaryUpgrade 61.43
256 TestKubernetesUpgrade 356.8
257 TestMissingContainerUpgrade 106.72
259 TestPause/serial/Start 93.27
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
262 TestNoKubernetes/serial/StartWithK8s 42.39
263 TestNoKubernetes/serial/StartWithStopK8s 20.76
264 TestNoKubernetes/serial/Start 8.82
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
266 TestNoKubernetes/serial/ProfileList 1.17
267 TestNoKubernetes/serial/Stop 1.3
268 TestNoKubernetes/serial/StartNoArgs 6.97
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
277 TestNetworkPlugins/group/false 3.96
281 TestPause/serial/SecondStartNoReconfiguration 33.05
283 TestStoppedBinaryUpgrade/Setup 1.32
284 TestStoppedBinaryUpgrade/Upgrade 61.89
292 TestNetworkPlugins/group/auto/Start 89.01
293 TestStoppedBinaryUpgrade/MinikubeLogs 1.79
294 TestNetworkPlugins/group/kindnet/Start 79.24
295 TestNetworkPlugins/group/auto/KubeletFlags 0.31
296 TestNetworkPlugins/group/auto/NetCatPod 9.38
297 TestNetworkPlugins/group/auto/DNS 0.19
298 TestNetworkPlugins/group/auto/Localhost 0.14
299 TestNetworkPlugins/group/auto/HairPin 0.14
300 TestNetworkPlugins/group/calico/Start 68.05
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
303 TestNetworkPlugins/group/kindnet/NetCatPod 12.33
304 TestNetworkPlugins/group/kindnet/DNS 0.22
305 TestNetworkPlugins/group/kindnet/Localhost 0.25
306 TestNetworkPlugins/group/kindnet/HairPin 0.2
307 TestNetworkPlugins/group/custom-flannel/Start 65.26
308 TestNetworkPlugins/group/calico/ControllerPod 6.01
309 TestNetworkPlugins/group/calico/KubeletFlags 0.43
310 TestNetworkPlugins/group/calico/NetCatPod 11.38
311 TestNetworkPlugins/group/calico/DNS 0.19
312 TestNetworkPlugins/group/calico/Localhost 0.13
313 TestNetworkPlugins/group/calico/HairPin 0.17
314 TestNetworkPlugins/group/enable-default-cni/Start 75.72
315 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.8
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.65
317 TestNetworkPlugins/group/custom-flannel/DNS 0.19
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
320 TestNetworkPlugins/group/flannel/Start 56.8
321 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
322 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.4
323 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
324 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
325 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
326 TestNetworkPlugins/group/flannel/ControllerPod 6.01
327 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
328 TestNetworkPlugins/group/flannel/NetCatPod 11.24
329 TestNetworkPlugins/group/bridge/Start 81.74
330 TestNetworkPlugins/group/flannel/DNS 0.14
331 TestNetworkPlugins/group/flannel/Localhost 0.13
332 TestNetworkPlugins/group/flannel/HairPin 0.14
334 TestStartStop/group/old-k8s-version/serial/FirstStart 65.09
335 TestNetworkPlugins/group/bridge/KubeletFlags 0.46
336 TestNetworkPlugins/group/bridge/NetCatPod 13.44
337 TestNetworkPlugins/group/bridge/DNS 0.16
338 TestNetworkPlugins/group/bridge/Localhost 0.19
339 TestNetworkPlugins/group/bridge/HairPin 0.13
340 TestStartStop/group/old-k8s-version/serial/DeployApp 9.46
342 TestStartStop/group/old-k8s-version/serial/Stop 12.23
344 TestStartStop/group/no-preload/serial/FirstStart 74.35
345 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
346 TestStartStop/group/old-k8s-version/serial/SecondStart 52.24
347 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
348 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
349 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
351 TestStartStop/group/no-preload/serial/DeployApp 8.43
353 TestStartStop/group/embed-certs/serial/FirstStart 89.14
355 TestStartStop/group/no-preload/serial/Stop 12.3
356 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
357 TestStartStop/group/no-preload/serial/SecondStart 55.19
358 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
359 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
360 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
362 TestStartStop/group/embed-certs/serial/DeployApp 9.45
364 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.65
366 TestStartStop/group/embed-certs/serial/Stop 12.33
367 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
368 TestStartStop/group/embed-certs/serial/SecondStart 61.63
369 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
370 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.42
371 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
372 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
375 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.33
377 TestStartStop/group/newest-cni/serial/FirstStart 44.3
378 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
379 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 57.28
380 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/Stop 2.2
383 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
384 TestStartStop/group/newest-cni/serial/SecondStart 15.22
385 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
390 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
391 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
TestDownloadOnly/v1.28.0/json-events (8.92s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-007224 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-007224 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.918636682s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (8.92s)
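The json-events test drives `minikube start -o=json` and consumes one JSON event per stdout line. A minimal consumer sketch that decodes generically; the `type`/`data` field names follow minikube's CloudEvents-style output but are assumptions here, not an asserted schema:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		// Usage: minikube start -o=json ... | ./events
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
		for sc.Scan() {
			var ev map[string]any
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip anything that isn't a JSON event line
			}
			fmt.Printf("%v %v\n", ev["type"], ev["data"])
		}
	}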

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1027 22:16:18.328047 1134735 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1027 22:16:18.328123 1134735 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
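preload-exists only asserts that the tarball downloaded by the previous step is still on disk. A hypothetical re-check of the same condition; the path mirrors the one in the log, but rooted at the default MINIKUBE_HOME rather than this run's CI workspace:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		home, err := os.UserHomeDir()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Default cache location; the CI run roots this at its own MINIKUBE_HOME.
		p := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4")
		if _, err := os.Stat(p); err != nil {
			fmt.Println("preload missing:", err)
			os.Exit(1)
		}
		fmt.Println("preload exists:", p)
	}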

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-007224
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-007224: exit status 85 (93.842189ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-007224 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-007224 │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:16:09
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:16:09.459328 1134740 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:16:09.459551 1134740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:16:09.459581 1134740 out.go:374] Setting ErrFile to fd 2...
	I1027 22:16:09.459601 1134740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:16:09.459937 1134740 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	W1027 22:16:09.460105 1134740 root.go:316] Error reading config file at /home/jenkins/minikube-integration/21790-1132878/.minikube/config/config.json: open /home/jenkins/minikube-integration/21790-1132878/.minikube/config/config.json: no such file or directory
	I1027 22:16:09.460571 1134740 out.go:368] Setting JSON to true
	I1027 22:16:09.461473 1134740 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17919,"bootTime":1761585451,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 22:16:09.461573 1134740 start.go:143] virtualization:  
	I1027 22:16:09.465939 1134740 out.go:99] [download-only-007224] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1027 22:16:09.466150 1134740 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball: no such file or directory
	I1027 22:16:09.466215 1134740 notify.go:221] Checking for updates...
	I1027 22:16:09.469221 1134740 out.go:171] MINIKUBE_LOCATION=21790
	I1027 22:16:09.472473 1134740 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:16:09.475465 1134740 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 22:16:09.478326 1134740 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 22:16:09.481224 1134740 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1027 22:16:09.486894 1134740 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1027 22:16:09.487249 1134740 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:16:09.516001 1134740 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 22:16:09.516115 1134740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:16:09.573609 1134740 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-27 22:16:09.564335733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:16:09.573714 1134740 docker.go:318] overlay module found
	I1027 22:16:09.576689 1134740 out.go:99] Using the docker driver based on user configuration
	I1027 22:16:09.576731 1134740 start.go:307] selected driver: docker
	I1027 22:16:09.576739 1134740 start.go:928] validating driver "docker" against <nil>
	I1027 22:16:09.576854 1134740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:16:09.636568 1134740 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-27 22:16:09.627751606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:16:09.636728 1134740 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 22:16:09.637026 1134740 start_flags.go:409] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1027 22:16:09.637183 1134740 start_flags.go:973] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 22:16:09.640380 1134740 out.go:171] Using Docker driver with root privileges
	I1027 22:16:09.643301 1134740 cni.go:84] Creating CNI manager for ""
	I1027 22:16:09.643375 1134740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:16:09.643390 1134740 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 22:16:09.643471 1134740 start.go:351] cluster config:
	{Name:download-only-007224 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-007224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:16:09.646334 1134740 out.go:99] Starting "download-only-007224" primary control-plane node in "download-only-007224" cluster
	I1027 22:16:09.646352 1134740 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:16:09.649350 1134740 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:16:09.649412 1134740 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 22:16:09.649515 1134740 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:16:09.665562 1134740 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 22:16:09.665757 1134740 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 22:16:09.665865 1134740 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 22:16:09.708421 1134740 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1027 22:16:09.708451 1134740 cache.go:59] Caching tarball of preloaded images
	I1027 22:16:09.709337 1134740 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 22:16:09.712676 1134740 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1027 22:16:09.712700 1134740 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1027 22:16:09.810069 1134740 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1027 22:16:09.810199 1134740 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-007224 host does not exist
	  To start a cluster, run: "minikube start -p download-only-007224"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
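The Last Start log above shows the preload download fetching an MD5 from the GCS API and appending it as a `?checksum=md5:...` query, so the tarball can be verified after transfer. A stand-alone sketch of the equivalent verification step; illustrative only, since minikube delegates this to its download package:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// md5sum hashes a file the way a checksum query is verified:
	// stream the bytes through MD5 and compare hex digests.
	func md5sum(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return "", err
		}
		return hex.EncodeToString(h.Sum(nil)), nil
	}

	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: md5sum <file>")
			os.Exit(2)
		}
		sum, err := md5sum(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(sum) // compare against the checksum returned by the GCS API
	}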

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-007224
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (9.8s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-798916 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-798916 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.796164215s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (9.80s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1027 22:16:28.583300 1134735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1027 22:16:28.583341 1134735 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-798916
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-798916: exit status 85 (98.290185ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-007224 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-007224 │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │ 27 Oct 25 22:16 UTC │
	│ delete  │ -p download-only-007224                                                                                                                                                   │ download-only-007224 │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │ 27 Oct 25 22:16 UTC │
	│ start   │ -o=json --download-only -p download-only-798916 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-798916 │ jenkins │ v1.37.0 │ 27 Oct 25 22:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 22:16:18
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 22:16:18.831029 1134940 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:16:18.831175 1134940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:16:18.831202 1134940 out.go:374] Setting ErrFile to fd 2...
	I1027 22:16:18.831219 1134940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:16:18.831504 1134940 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:16:18.831949 1134940 out.go:368] Setting JSON to true
	I1027 22:16:18.832832 1134940 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17928,"bootTime":1761585451,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 22:16:18.832901 1134940 start.go:143] virtualization:  
	I1027 22:16:18.836349 1134940 out.go:99] [download-only-798916] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 22:16:18.836658 1134940 notify.go:221] Checking for updates...
	I1027 22:16:18.840612 1134940 out.go:171] MINIKUBE_LOCATION=21790
	I1027 22:16:18.843597 1134940 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:16:18.846605 1134940 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 22:16:18.850179 1134940 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 22:16:18.853104 1134940 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1027 22:16:18.858713 1134940 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1027 22:16:18.859065 1134940 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:16:18.889784 1134940 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 22:16:18.889908 1134940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:16:18.949041 1134940 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-10-27 22:16:18.938878445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:16:18.949153 1134940 docker.go:318] overlay module found
	I1027 22:16:18.952223 1134940 out.go:99] Using the docker driver based on user configuration
	I1027 22:16:18.952262 1134940 start.go:307] selected driver: docker
	I1027 22:16:18.952270 1134940 start.go:928] validating driver "docker" against <nil>
	I1027 22:16:18.952403 1134940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:16:19.004409 1134940 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-10-27 22:16:18.995799583 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:16:19.004594 1134940 start_flags.go:326] no existing cluster config was found, will generate one from the flags 
	I1027 22:16:19.004901 1134940 start_flags.go:409] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1027 22:16:19.005071 1134940 start_flags.go:973] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 22:16:19.008388 1134940 out.go:171] Using Docker driver with root privileges
	I1027 22:16:19.011458 1134940 cni.go:84] Creating CNI manager for ""
	I1027 22:16:19.011539 1134940 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 22:16:19.011554 1134940 start_flags.go:335] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 22:16:19.011638 1134940 start.go:351] cluster config:
	{Name:download-only-798916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-798916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:16:19.014668 1134940 out.go:99] Starting "download-only-798916" primary control-plane node in "download-only-798916" cluster
	I1027 22:16:19.014687 1134940 cache.go:124] Beginning downloading kic base image for docker with crio
	I1027 22:16:19.017540 1134940 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1027 22:16:19.017568 1134940 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:16:19.017689 1134940 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 22:16:19.033816 1134940 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 22:16:19.033946 1134940 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 22:16:19.033965 1134940 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1027 22:16:19.033970 1134940 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1027 22:16:19.033977 1134940 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1027 22:16:19.072603 1134940 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 22:16:19.072640 1134940 cache.go:59] Caching tarball of preloaded images
	I1027 22:16:19.072820 1134940 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 22:16:19.075911 1134940 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1027 22:16:19.075931 1134940 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1027 22:16:19.159996 1134940 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1027 22:16:19.160058 1134940 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-798916 host does not exist
	  To start a cluster, run: "minikube start -p download-only-798916"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.10s)
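For reference, the preload fetch logged above pulls an MD5 checksum from the GCS API and pins the download URL with it. The same check can be rerun by hand after the fact; a minimal sketch, assuming the tarball already sits in the cache path shown in the log and that md5sum is available:

	# Re-verify the cached preload tarball against the checksum from the log (hypothetical manual step).
	cd /home/jenkins/minikube-integration/21790-1132878/.minikube/cache/preloaded-tarball
	echo "bc3e4aa50814345ef9ba3452bb5efb9f  preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4" | md5sum -c -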

TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-798916
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.7s)

=== RUN   TestBinaryMirror
I1027 22:16:29.761067 1134735 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-961152 --alsologtostderr --binary-mirror http://127.0.0.1:35369 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-961152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-961152
--- PASS: TestBinaryMirror (0.70s)
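The "Not caching binary" line above shows the checksum=file: pattern: the kubectl download is verified against the upstream .sha256 file rather than a hardcoded digest. A minimal sketch of the same verification done manually, assuming curl and sha256sum are available (URLs as logged):

	# Download kubectl and check it against its published digest (hypothetical manual step).
	curl -fsSLO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl
	echo "$(curl -fsSL https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256)  kubectl" | sha256sum -c -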

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-789752
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-789752: exit status 85 (72.661885ms)
-- stdout --
	* Profile "addons-789752" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-789752"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-789752
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-789752: exit status 85 (81.332461ms)
-- stdout --
	* Profile "addons-789752" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-789752"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (166.53s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-789752 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-789752 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m46.529904622s)
--- PASS: TestAddons/Setup (166.53s)

TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-789752 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-789752 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/serial/GCPAuth/FakeCredentials (10.74s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-789752 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-789752 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [acbf609f-2010-4514-8ae8-71a4efdc0c5c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [acbf609f-2010-4514-8ae8-71a4efdc0c5c] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004110838s
addons_test.go:694: (dbg) Run:  kubectl --context addons-789752 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-789752 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-789752 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-789752 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.74s)

TestAddons/StoppedEnableDisable (12.41s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-789752
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-789752: (12.123425968s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-789752
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-789752
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-789752
--- PASS: TestAddons/StoppedEnableDisable (12.41s)

TestCertOptions (37.69s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-976513 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-976513 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.773527636s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-976513 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-976513 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-976513 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-976513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-976513
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-976513: (2.133838062s)
--- PASS: TestCertOptions (37.69s)
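The openssl invocation above dumps the whole apiserver certificate; when only the --apiserver-ips/--apiserver-names values matter, narrowing the output to the SAN extension is quicker. A sketch, assuming the node ships OpenSSL 1.1.1+ (required for the -ext flag):

	# Print only the subjectAltName entries of the apiserver cert (hypothetical spot-check).
	out/minikube-linux-arm64 -p cert-options-976513 ssh \
	  "openssl x509 -noout -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt"
	# Should list 127.0.0.1, 192.168.15.15, localhost and www.google.com among the SANs.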

TestCertExpiration (254.58s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-635247 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1027 23:09:18.107055 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-635247 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (45.568917344s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-635247 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-635247 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (26.449579013s)
helpers_test.go:175: Cleaning up "cert-expiration-635247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-635247
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-635247: (2.56021667s)
--- PASS: TestCertExpiration (254.58s)

TestForceSystemdFlag (48.07s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-180041 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-180041 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.955297867s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-180041 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-180041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-180041
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-180041: (2.668763358s)
--- PASS: TestForceSystemdFlag (48.07s)
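The file read above is where minikube drops its CRI-O override config; when --force-systemd takes effect, the systemd cgroup manager should show up there. A hedged one-liner, assuming the standard CRI-O cgroup_manager key:

	# Confirm the cgroup manager that --force-systemd configured (hypothetical check).
	out/minikube-linux-arm64 -p force-systemd-flag-180041 ssh \
	  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
	# Expected, under the above assumption: cgroup_manager = "systemd"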

TestForceSystemdEnv (41.71s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-179399 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-179399 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.011574823s)
helpers_test.go:175: Cleaning up "force-systemd-env-179399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-179399
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-179399: (2.696594005s)
--- PASS: TestForceSystemdEnv (41.71s)

TestErrorSpam/setup (33.53s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-405970 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-405970 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-405970 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-405970 --driver=docker  --container-runtime=crio: (33.528296344s)
--- PASS: TestErrorSpam/setup (33.53s)

TestErrorSpam/start (0.77s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

TestErrorSpam/status (1.04s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 status
--- PASS: TestErrorSpam/status (1.04s)

TestErrorSpam/pause (5.71s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 pause: exit status 80 (1.86864497s)
-- stdout --
	* Pausing node nospam-405970 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:23:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_3.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 pause: exit status 80 (2.012236298s)
-- stdout --
	* Pausing node nospam-405970 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:23:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_3.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 pause: exit status 80 (1.830904803s)
-- stdout --
	* Pausing node nospam-405970 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:23:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_3.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.71s)
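All three pause attempts fail identically: "sudo runc list -f json" exits 1 because /run/runc is missing inside the node, and minikube surfaces that as GUEST_PAUSE. The failing call can be replayed directly from the host; a minimal sketch using only commands already present in the log:

	# Replay the exact command that pause runs inside the node.
	out/minikube-linux-arm64 -p nospam-405970 ssh "sudo runc list -f json"
	# And check whether runc's default state directory exists at all:
	out/minikube-linux-arm64 -p nospam-405970 ssh "ls -ld /run/runc"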

TestErrorSpam/unpause (6.07s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 unpause: exit status 80 (2.180743462s)
-- stdout --
	* Unpausing node nospam-405970 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:23:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_3.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 unpause: exit status 80 (1.800274142s)
-- stdout --
	* Unpausing node nospam-405970 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:23:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_3.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 unpause: exit status 80 (2.08577005s)
-- stdout --
	* Unpausing node nospam-405970 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T22:23:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_3.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.07s)

TestErrorSpam/stop (1.5s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 stop: (1.305575248s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-405970 --log_dir /tmp/nospam-405970 stop
--- PASS: TestErrorSpam/stop (1.50s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21790-1132878/.minikube/files/etc/test/nested/copy/1134735/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.65s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-812436 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1027 22:24:18.103855 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:24:18.110209 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:24:18.121745 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:24:18.143138 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:24:18.184578 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:24:18.266080 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:24:18.427577 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:24:18.749037 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:24:19.390824 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:24:20.672208 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:24:23.233686 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:24:28.355164 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:24:38.597077 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:24:59.079226 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-812436 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m20.647580022s)
--- PASS: TestFunctional/serial/StartWithProxy (80.65s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (24.2s)

=== RUN   TestFunctional/serial/SoftStart
I1027 22:25:01.025795 1134735 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-812436 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-812436 --alsologtostderr -v=8: (24.197109011s)
functional_test.go:678: soft start took 24.197642404s for "functional-812436" cluster.
I1027 22:25:25.230276 1134735 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (24.20s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-812436 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-812436 cache add registry.k8s.io/pause:3.1: (1.21727169s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-812436 cache add registry.k8s.io/pause:3.3: (1.278207803s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-812436 cache add registry.k8s.io/pause:latest: (1.084743587s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.58s)

TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-812436 /tmp/TestFunctionalserialCacheCmdcacheadd_local975045356/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 cache add minikube-local-cache-test:functional-812436
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 cache delete minikube-local-cache-test:functional-812436
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-812436
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812436 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (303.188452ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 kubectl -- --context functional-812436 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-812436 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (32.24s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-812436 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1027 22:25:40.041953 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-812436 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.244208243s)
functional_test.go:776: restart took 32.24432483s for "functional-812436" cluster.
I1027 22:26:05.005014 1134735 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (32.24s)
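
The `--extra-config` flag takes the form component.key=value and maps onto that component's own flags; restarting an existing profile with it keeps the cluster but re-renders the component configuration. A sketch of the restart above:

	out/minikube-linux-arm64 start -p functional-812436 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all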

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-812436 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
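
The same health check is easy to reproduce with plain kubectl; a one-liner that prints each control-plane pod with its phase (the jsonpath expression is an illustrative choice, not the test's own):

	kubectl --context functional-812436 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'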

TestFunctional/parallel/LogsCmd (1.45s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-812436 logs: (1.452547022s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

TestFunctional/serial/LogsFileCmd (1.5s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 logs --file /tmp/TestFunctionalserialLogsFileCmd2975128979/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-812436 logs --file /tmp/TestFunctionalserialLogsFileCmd2975128979/001/logs.txt: (1.502443978s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.50s)
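
Both logs tests map to everyday debugging commands; the --file variant is what the failure messages elsewhere in this report ask users to attach to GitHub issues (the output path here is an example):

	out/minikube-linux-arm64 -p functional-812436 logs
	out/minikube-linux-arm64 -p functional-812436 logs --file /tmp/logs.txt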

TestFunctional/serial/InvalidService (4.1s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-812436 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-812436
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-812436: exit status 115 (382.197148ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32431 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-812436 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.10s)

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812436 config get cpus: exit status 14 (94.338456ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812436 config get cpus: exit status 14 (86.258813ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
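
Exit status 14 is the "key not found" result the test asserts twice; the full set/get/unset round trip looks like this:

	out/minikube-linux-arm64 -p functional-812436 config set cpus 2
	out/minikube-linux-arm64 -p functional-812436 config get cpus      # prints 2
	out/minikube-linux-arm64 -p functional-812436 config unset cpus
	out/minikube-linux-arm64 -p functional-812436 config get cpus      # exit 14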

TestFunctional/parallel/DashboardCmd (9.99s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-812436 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-812436 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 1161327: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.99s)

TestFunctional/parallel/DryRun (0.62s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-812436 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-812436 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (260.75304ms)
-- stdout --
	* [functional-812436] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1027 22:36:42.874955 1160762 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:36:42.875142 1160762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:36:42.875173 1160762 out.go:374] Setting ErrFile to fd 2...
	I1027 22:36:42.875192 1160762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:36:42.875451 1160762 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:36:42.875886 1160762 out.go:368] Setting JSON to false
	I1027 22:36:42.876793 1160762 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19152,"bootTime":1761585451,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 22:36:42.876889 1160762 start.go:143] virtualization:  
	I1027 22:36:42.882435 1160762 out.go:179] * [functional-812436] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 22:36:42.886094 1160762 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:36:42.886197 1160762 notify.go:221] Checking for updates...
	I1027 22:36:42.891988 1160762 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:36:42.894776 1160762 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 22:36:42.897675 1160762 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 22:36:42.900514 1160762 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 22:36:42.903460 1160762 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:36:42.907038 1160762 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:36:42.907722 1160762 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:36:42.942955 1160762 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 22:36:42.943052 1160762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:36:43.038859 1160762 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 22:36:43.027974283 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:36:43.039021 1160762 docker.go:318] overlay module found
	I1027 22:36:43.042277 1160762 out.go:179] * Using the docker driver based on existing profile
	I1027 22:36:43.045145 1160762 start.go:307] selected driver: docker
	I1027 22:36:43.045171 1160762 start.go:928] validating driver "docker" against &{Name:functional-812436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-812436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:36:43.045274 1160762 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:36:43.052595 1160762 out.go:203] 
	W1027 22:36:43.055564 1160762 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1027 22:36:43.058760 1160762 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-812436 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.62s)
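
`--dry-run` runs the full validation path without touching the existing profile, so the 250MB request fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while a valid config exits cleanly:

	out/minikube-linux-arm64 start -p functional-812436 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=crio                          # exit 23
	out/minikube-linux-arm64 start -p functional-812436 --dry-run \
	  --driver=docker --container-runtime=crio                          # exit 0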

TestFunctional/parallel/InternationalLanguage (0.24s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-812436 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-812436 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (243.409004ms)
-- stdout --
	* [functional-812436] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1027 22:36:42.620729 1160675 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:36:42.621052 1160675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:36:42.621066 1160675 out.go:374] Setting ErrFile to fd 2...
	I1027 22:36:42.621072 1160675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:36:42.621455 1160675 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:36:42.621867 1160675 out.go:368] Setting JSON to false
	I1027 22:36:42.622900 1160675 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19152,"bootTime":1761585451,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 22:36:42.622974 1160675 start.go:143] virtualization:  
	I1027 22:36:42.626461 1160675 out.go:179] * [functional-812436] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1027 22:36:42.629459 1160675 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 22:36:42.629518 1160675 notify.go:221] Checking for updates...
	I1027 22:36:42.635516 1160675 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 22:36:42.638469 1160675 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 22:36:42.641337 1160675 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 22:36:42.644196 1160675 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 22:36:42.647071 1160675 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 22:36:42.650319 1160675 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:36:42.651093 1160675 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 22:36:42.684705 1160675 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 22:36:42.684813 1160675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:36:42.782615 1160675 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 22:36:42.773071033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:36:42.782716 1160675 docker.go:318] overlay module found
	I1027 22:36:42.785855 1160675 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1027 22:36:42.788936 1160675 start.go:307] selected driver: docker
	I1027 22:36:42.788957 1160675 start.go:928] validating driver "docker" against &{Name:functional-812436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-812436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 22:36:42.789049 1160675 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 22:36:42.792653 1160675 out.go:203] 
	W1027 22:36:42.795516 1160675 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1027 22:36:42.798585 1160675 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)

TestFunctional/parallel/StatusCmd (1.31s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.31s)

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (26.07s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [1ceb2728-25f7-4317-81bd-4d43cdbeecad] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003480024s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-812436 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-812436 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-812436 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-812436 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [fe785ddd-c6e5-48a5-93ea-6ba9db8822a5] Pending
helpers_test.go:352: "sp-pod" [fe785ddd-c6e5-48a5-93ea-6ba9db8822a5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [fe785ddd-c6e5-48a5-93ea-6ba9db8822a5] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003369716s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-812436 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-812436 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-812436 delete -f testdata/storage-provisioner/pod.yaml: (1.130319176s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-812436 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [fbfca064-4ea8-4403-a075-8eeed467e7e6] Pending
helpers_test.go:352: "sp-pod" [fbfca064-4ea8-4403-a075-8eeed467e7e6] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003213315s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-812436 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.07s)
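
The key assertion above is persistence across pod recreation: a file written into the PVC-backed mount survives deleting and re-applying the consuming pod. Condensed flow (manifests are the repo's testdata):

	kubectl --context functional-812436 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-812436 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-812436 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-812436 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-812436 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-812436 exec sp-pod -- ls /tmp/mount   # foo is still there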

TestFunctional/parallel/SSHCmd (0.73s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

TestFunctional/parallel/CpCmd (2.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh -n functional-812436 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 cp functional-812436:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd457839162/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh -n functional-812436 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh -n functional-812436 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.29s)
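
`minikube cp` copies in both directions, using the `profile:path` form for the node side; the host-side destination below is an example path:

	# host -> node
	out/minikube-linux-arm64 -p functional-812436 cp testdata/cp-test.txt /home/docker/cp-test.txt
	# node -> host
	out/minikube-linux-arm64 -p functional-812436 cp functional-812436:/home/docker/cp-test.txt /tmp/cp-test.txt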

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1134735/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "sudo cat /etc/test/nested/copy/1134735/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.26s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1134735.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "sudo cat /etc/ssl/certs/1134735.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1134735.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "sudo cat /usr/share/ca-certificates/1134735.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/11347352.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "sudo cat /etc/ssl/certs/11347352.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/11347352.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "sudo cat /usr/share/ca-certificates/11347352.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.26s)
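
Certificates dropped into $MINIKUBE_HOME/certs are synced into the guest at both the .pem paths and the hashed .0 names checked above (the sync location is minikube's documented behavior, not shown in this log); verification is just cat-ing them over ssh:

	out/minikube-linux-arm64 -p functional-812436 ssh "sudo cat /etc/ssl/certs/1134735.pem"
	out/minikube-linux-arm64 -p functional-812436 ssh "sudo cat /etc/ssl/certs/51391683.0"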

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-812436 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812436 ssh "sudo systemctl is-active docker": exit status 1 (329.631303ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812436 ssh "sudo systemctl is-active containerd": exit status 1 (342.269102ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
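
With crio as the selected runtime, the docker and containerd units report inactive, so `systemctl is-active` exits non-zero (status 3 above); the inverse check against the active runtime is a reasonable sanity test (the crio unit name is assumed from the runtime in use, not shown in this log):

	out/minikube-linux-arm64 -p functional-812436 ssh "sudo systemctl is-active docker"   # inactive, exit 3
	out/minikube-linux-arm64 -p functional-812436 ssh "sudo systemctl is-active crio"     # expected: active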

TestFunctional/parallel/License (0.35s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-812436 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-812436 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-812436 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-812436 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1156937: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-812436 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-812436 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [b5125871-56eb-42df-b227-e1348904370f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [b5125871-56eb-42df-b227-e1348904370f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.00322138s
I1027 22:26:23.477457 1134735 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.41s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-812436 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.15.240 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
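
The tunnel suite amounts to a standard LoadBalancer workflow: keep `minikube tunnel` running in one shell, and the service's ingress IP becomes directly reachable from the host:

	out/minikube-linux-arm64 -p functional-812436 tunnel &
	kubectl --context functional-812436 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl http://10.105.15.240/    # the IP reported by this run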

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-812436 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "360.659831ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.47001ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "359.0548ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "53.800568ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (8.11s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-812436 /tmp/TestFunctionalparallelMountCmdany-port1781832229/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761604588746778820" to /tmp/TestFunctionalparallelMountCmdany-port1781832229/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761604588746778820" to /tmp/TestFunctionalparallelMountCmdany-port1781832229/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761604588746778820" to /tmp/TestFunctionalparallelMountCmdany-port1781832229/001/test-1761604588746778820
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812436 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (357.991402ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1027 22:36:29.105055 1134735 retry.go:31] will retry after 615.64456ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 27 22:36 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 27 22:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 27 22:36 test-1761604588746778820
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh cat /mount-9p/test-1761604588746778820
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-812436 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [3b247877-62e0-4999-81da-c00bcf3c317c] Pending
helpers_test.go:352: "busybox-mount" [3b247877-62e0-4999-81da-c00bcf3c317c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [3b247877-62e0-4999-81da-c00bcf3c317c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [3b247877-62e0-4999-81da-c00bcf3c317c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003393035s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-812436 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-812436 /tmp/TestFunctionalparallelMountCmdany-port1781832229/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.11s)
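
The 9p mount above shares a host directory into the guest for as long as the mount process runs; a by-hand version of the same check (the host path here is an example):

	out/minikube-linux-arm64 mount -p functional-812436 /tmp/mount-demo:/mount-9p &
	out/minikube-linux-arm64 -p functional-812436 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-812436 ssh -- ls -la /mount-9p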

TestFunctional/parallel/MountCmd/specific-port (2.13s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-812436 /tmp/TestFunctionalparallelMountCmdspecific-port1769920795/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812436 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (356.885775ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1027 22:36:37.206588 1134735 retry.go:31] will retry after 716.372518ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-812436 /tmp/TestFunctionalparallelMountCmdspecific-port1769920795/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812436 ssh "sudo umount -f /mount-9p": exit status 1 (280.112209ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-812436 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-812436 /tmp/TestFunctionalparallelMountCmdspecific-port1769920795/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.13s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.27s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-812436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4136627776/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-812436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4136627776/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-812436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4136627776/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812436 ssh "findmnt -T" /mount1: exit status 1 (564.056431ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1027 22:36:39.545400 1134735 retry.go:31] will retry after 689.519425ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-812436 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-812436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4136627776/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-812436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4136627776/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-812436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4136627776/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.27s)
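
Cleaning up stray mounts does not require hunting down the daemon processes; the kill flag used above tears down every mount for the profile at once:

	out/minikube-linux-arm64 mount -p functional-812436 --kill=true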

TestFunctional/parallel/ServiceCmd/List (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 service list -o json
functional_test.go:1504: Took "663.604454ms" to run "out/minikube-linux-arm64 -p functional-812436 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)
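The subtest above only times the command, but `service list -o json` exists for programmatic use. A minimal decoding sketch follows; the field names (Namespace, Name, URLs) are assumptions to verify against the minikube release under test.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// svc holds the fields assumed to be present in each entry of the
// `minikube service list -o json` array; verify against your release.
type svc struct {
	Namespace string   `json:"Namespace"`
	Name      string   `json:"Name"`
	URLs      []string `json:"URLs"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-812436",
		"service", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var services []svc
	if err := json.Unmarshal(out, &services); err != nil {
		panic(err)
	}
	for _, s := range services {
		fmt.Printf("%s/%s %v\n", s.Namespace, s.Name, s.URLs)
	}
}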

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.98s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.98s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-812436 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-812436 image ls --format short --alsologtostderr:
I1027 22:36:57.081536 1163230 out.go:360] Setting OutFile to fd 1 ...
I1027 22:36:57.083318 1163230 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:36:57.083346 1163230 out.go:374] Setting ErrFile to fd 2...
I1027 22:36:57.083354 1163230 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:36:57.083641 1163230 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
I1027 22:36:57.084326 1163230 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:36:57.084446 1163230 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:36:57.084897 1163230 cli_runner.go:164] Run: docker container inspect functional-812436 --format={{.State.Status}}
I1027 22:36:57.110260 1163230 ssh_runner.go:195] Run: systemctl --version
I1027 22:36:57.110320 1163230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812436
I1027 22:36:57.143832 1163230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/functional-812436/id_rsa Username:docker}
I1027 22:36:57.249167 1163230 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-812436 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ docker.io/library/nginx                 │ latest             │ e612b97116b41 │ 176MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-812436 image ls --format table --alsologtostderr:
I1027 22:36:57.365053 1163301 out.go:360] Setting OutFile to fd 1 ...
I1027 22:36:57.365267 1163301 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:36:57.365292 1163301 out.go:374] Setting ErrFile to fd 2...
I1027 22:36:57.365311 1163301 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:36:57.365656 1163301 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
I1027 22:36:57.366308 1163301 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:36:57.366483 1163301 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:36:57.366977 1163301 cli_runner.go:164] Run: docker container inspect functional-812436 --format={{.State.Status}}
I1027 22:36:57.397755 1163301 ssh_runner.go:195] Run: systemctl --version
I1027 22:36:57.397805 1163301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812436
I1027 22:36:57.434105 1163301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/functional-812436/id_rsa Username:docker}
I1027 22:36:57.547062 1163301 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-812436 image ls --format json --alsologtostderr:
[{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88
f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"e612b97116b41d24816faa9fd204e1177027648a2cb14bb627dd1eaab1494e8f","repoDigests":["doc
ker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903","docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f"],"repoTags":["docker.io/library/nginx:latest"],"size":"176071022"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"a422e0e982356f6c
1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-schedu
ler:v1.34.1"],"size":"51592017"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":[
"registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-812436 image ls --format json --alsologtostderr:
I1027 22:36:57.142267 1163241 out.go:360] Setting OutFile to fd 1 ...
I1027 22:36:57.142483 1163241 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:36:57.142512 1163241 out.go:374] Setting ErrFile to fd 2...
I1027 22:36:57.142533 1163241 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:36:57.142927 1163241 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
I1027 22:36:57.144797 1163241 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:36:57.145077 1163241 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:36:57.145955 1163241 cli_runner.go:164] Run: docker container inspect functional-812436 --format={{.State.Status}}
I1027 22:36:57.170612 1163241 ssh_runner.go:195] Run: systemctl --version
I1027 22:36:57.170664 1163241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812436
I1027 22:36:57.190271 1163241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/functional-812436/id_rsa Username:docker}
I1027 22:36:57.297887 1163241 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
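The stdout above shows the full schema `image ls --format json` emits: id, repoDigests, repoTags (empty for untagged images such as the dashboard and metrics-scraper), and size in bytes serialized as a string. A minimal sketch that decodes it, with struct tags taken from the output above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image matches the records in the JSON output above: lower-case keys,
// size carried as a decimal string.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-812436",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		// IDs in this output are 64 hex chars; print the short form.
		fmt.Println(img.ID[:13], img.RepoTags, img.Size)
	}
}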

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-812436 image ls --format yaml --alsologtostderr:
- id: e612b97116b41d24816faa9fd204e1177027648a2cb14bb627dd1eaab1494e8f
repoDigests:
- docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
- docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f
repoTags:
- docker.io/library/nginx:latest
size: "176071022"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-812436 image ls --format yaml --alsologtostderr:
I1027 22:36:56.771493 1163153 out.go:360] Setting OutFile to fd 1 ...
I1027 22:36:56.771670 1163153 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:36:56.771680 1163153 out.go:374] Setting ErrFile to fd 2...
I1027 22:36:56.771685 1163153 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:36:56.771951 1163153 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
I1027 22:36:56.772559 1163153 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:36:56.772674 1163153 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:36:56.773117 1163153 cli_runner.go:164] Run: docker container inspect functional-812436 --format={{.State.Status}}
I1027 22:36:56.792016 1163153 ssh_runner.go:195] Run: systemctl --version
I1027 22:36:56.792072 1163153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812436
I1027 22:36:56.816181 1163153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/functional-812436/id_rsa Username:docker}
I1027 22:36:56.951436 1163153 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812436 ssh pgrep buildkitd: exit status 1 (339.728586ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 image build -t localhost/my-image:functional-812436 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-812436 image build -t localhost/my-image:functional-812436 testdata/build --alsologtostderr: (3.676314849s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-812436 image build -t localhost/my-image:functional-812436 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c14f5c2147f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-812436
--> 614fa38a408
Successfully tagged localhost/my-image:functional-812436
614fa38a4084f48cb48d82d90f4b6346d24a40370029e279c061ba5105b8bc1d
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-812436 image build -t localhost/my-image:functional-812436 testdata/build --alsologtostderr:
I1027 22:36:57.741487 1163409 out.go:360] Setting OutFile to fd 1 ...
I1027 22:36:57.742250 1163409 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:36:57.742258 1163409 out.go:374] Setting ErrFile to fd 2...
I1027 22:36:57.742262 1163409 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 22:36:57.742591 1163409 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
I1027 22:36:57.743216 1163409 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:36:57.743724 1163409 config.go:182] Loaded profile config "functional-812436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 22:36:57.744266 1163409 cli_runner.go:164] Run: docker container inspect functional-812436 --format={{.State.Status}}
I1027 22:36:57.768270 1163409 ssh_runner.go:195] Run: systemctl --version
I1027 22:36:57.768326 1163409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812436
I1027 22:36:57.787406 1163409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34254 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/functional-812436/id_rsa Username:docker}
I1027 22:36:57.893285 1163409 build_images.go:162] Building image from path: /tmp/build.1535684014.tar
I1027 22:36:57.893361 1163409 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1027 22:36:57.902789 1163409 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1535684014.tar
I1027 22:36:57.907042 1163409 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1535684014.tar: stat -c "%s %y" /var/lib/minikube/build/build.1535684014.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1535684014.tar': No such file or directory
I1027 22:36:57.907073 1163409 ssh_runner.go:362] scp /tmp/build.1535684014.tar --> /var/lib/minikube/build/build.1535684014.tar (3072 bytes)
I1027 22:36:57.927108 1163409 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1535684014
I1027 22:36:57.935432 1163409 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1535684014 -xf /var/lib/minikube/build/build.1535684014.tar
I1027 22:36:57.944130 1163409 crio.go:315] Building image: /var/lib/minikube/build/build.1535684014
I1027 22:36:57.944219 1163409 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-812436 /var/lib/minikube/build/build.1535684014 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1027 22:37:01.337294 1163409 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-812436 /var/lib/minikube/build/build.1535684014 --cgroup-manager=cgroupfs: (3.393033684s)
I1027 22:37:01.337366 1163409 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1535684014
I1027 22:37:01.345460 1163409 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1535684014.tar
I1027 22:37:01.353576 1163409 build_images.go:218] Built localhost/my-image:functional-812436 from /tmp/build.1535684014.tar
I1027 22:37:01.353604 1163409 build_images.go:134] succeeded building to: functional-812436
I1027 22:37:01.353610 1163409 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.25s)
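The stderr above traces the build path: pack testdata/build into /tmp/build.1535684014.tar, scp it into the node, extract it under /var/lib/minikube/build, then run podman build against the extracted directory. A simplified sketch of the packaging step only, using archive/tar; symlink, ownership, and permission handling are omitted, so this is a sketch rather than minikube's real build_images code:

package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

// tarDir writes the regular files under dir into a tar archive at dest,
// the same shape as the /tmp/build.*.tar context seen in the log above.
func tarDir(dir, dest string) error {
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	tw := tar.NewWriter(out)
	defer tw.Close()

	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = rel // store paths relative to the context root
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
}

func main() {
	if err := tarDir("testdata/build", "/tmp/build-context.tar"); err != nil {
		panic(err)
	}
}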

TestFunctional/parallel/ImageCommands/Setup (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-812436
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 image rm kicbase/echo-server:functional-812436 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-812436 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-812436
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-812436
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-812436
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (192.29s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1027 22:39:18.107329 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-048384 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m11.386267126s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (192.29s)
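Outside the harness, the same three-control-plane cluster comes up with exactly the flags in the Run line above; a minimal sketch that drives the binary and then checks status (binary path and profile name taken from the log, everything else assumed):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run shells out to the minikube binary with the given args, streaming
// its output; the flags below mirror the StartCluster invocation above.
func run(args ...string) error {
	cmd := exec.Command("out/minikube-linux-arm64", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := run("-p", "ha-048384", "start", "--ha",
		"--memory", "3072", "--wait", "true",
		"--driver=docker", "--container-runtime=crio"); err != nil {
		panic(err)
	}
	if err := run("-p", "ha-048384", "status"); err != nil {
		// status exits non-zero when any node is down, as in the
		// StopSecondaryNode output later in this report (exit status 7).
		fmt.Println("status reported a degraded cluster:", err)
	}
}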

TestMultiControlPlane/serial/DeployApp (6.55s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-048384 kubectl -- rollout status deployment/busybox: (3.777416438s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- exec busybox-7b57f96db7-bsjzj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- exec busybox-7b57f96db7-dxfmf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- exec busybox-7b57f96db7-p6jgv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- exec busybox-7b57f96db7-bsjzj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- exec busybox-7b57f96db7-dxfmf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- exec busybox-7b57f96db7-p6jgv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- exec busybox-7b57f96db7-bsjzj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- exec busybox-7b57f96db7-dxfmf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- exec busybox-7b57f96db7-p6jgv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.55s)

TestMultiControlPlane/serial/PingHostFromPods (1.46s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- exec busybox-7b57f96db7-bsjzj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- exec busybox-7b57f96db7-bsjzj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- exec busybox-7b57f96db7-dxfmf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- exec busybox-7b57f96db7-dxfmf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- exec busybox-7b57f96db7-p6jgv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 kubectl -- exec busybox-7b57f96db7-p6jgv -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.46s)
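The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above takes the third space-separated field of the fifth output line, which is where busybox's nslookup prints the resolved address (192.168.49.1, the host gateway pinged next). A Go equivalent of that extraction, equally position-fragile by design; the sample output below is hypothetical, shaped like what the test parses:

package main

import (
	"fmt"
	"strings"
)

// fifthLineThirdField reproduces `awk 'NR==5' | cut -d' ' -f3` from the
// test above. Like cut, strings.Split on a single space treats runs of
// spaces as empty fields, so the layout dependence is identical.
func fifthLineThirdField(out string) (string, error) {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("expected at least 5 lines, got %d", len(lines))
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return "", fmt.Errorf("line 5 has fewer than 3 fields: %q", lines[4])
	}
	return fields[2], nil
}

func main() {
	// Hypothetical busybox nslookup output for host.minikube.internal.
	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10\n\nName:      host.minikube.internal\nAddress 1: 192.168.49.1 host.minikube.internal\n"
	ip, err := fifthLineThirdField(sample)
	fmt.Println(ip, err) // prints 192.168.49.1 <nil>
}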

TestMultiControlPlane/serial/AddWorkerNode (60.97s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 node add --alsologtostderr -v 5
E1027 22:40:41.167926 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:41:14.069207 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:41:14.076274 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:41:14.087890 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:41:14.109358 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:41:14.150751 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:41:14.232278 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:41:14.393620 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:41:14.715478 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:41:15.357104 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:41:16.638603 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:41:19.200541 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:41:24.322800 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-048384 node add --alsologtostderr -v 5: (59.886241316s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-048384 status --alsologtostderr -v 5: (1.082722551s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.97s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-048384 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.094143805s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

TestMultiControlPlane/serial/CopyFile (20.27s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-048384 status --output json --alsologtostderr -v 5: (1.06452153s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp testdata/cp-test.txt ha-048384:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp ha-048384:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2786919824/001/cp-test_ha-048384.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp ha-048384:/home/docker/cp-test.txt ha-048384-m02:/home/docker/cp-test_ha-048384_ha-048384-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m02 "sudo cat /home/docker/cp-test_ha-048384_ha-048384-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp ha-048384:/home/docker/cp-test.txt ha-048384-m03:/home/docker/cp-test_ha-048384_ha-048384-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m03 "sudo cat /home/docker/cp-test_ha-048384_ha-048384-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp ha-048384:/home/docker/cp-test.txt ha-048384-m04:/home/docker/cp-test_ha-048384_ha-048384-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m04 "sudo cat /home/docker/cp-test_ha-048384_ha-048384-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp testdata/cp-test.txt ha-048384-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp ha-048384-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2786919824/001/cp-test_ha-048384-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp ha-048384-m02:/home/docker/cp-test.txt ha-048384:/home/docker/cp-test_ha-048384-m02_ha-048384.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m02 "sudo cat /home/docker/cp-test.txt"
E1027 22:41:34.564266 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384 "sudo cat /home/docker/cp-test_ha-048384-m02_ha-048384.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp ha-048384-m02:/home/docker/cp-test.txt ha-048384-m03:/home/docker/cp-test_ha-048384-m02_ha-048384-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m03 "sudo cat /home/docker/cp-test_ha-048384-m02_ha-048384-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp ha-048384-m02:/home/docker/cp-test.txt ha-048384-m04:/home/docker/cp-test_ha-048384-m02_ha-048384-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m04 "sudo cat /home/docker/cp-test_ha-048384-m02_ha-048384-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp testdata/cp-test.txt ha-048384-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp ha-048384-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2786919824/001/cp-test_ha-048384-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp ha-048384-m03:/home/docker/cp-test.txt ha-048384:/home/docker/cp-test_ha-048384-m03_ha-048384.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384 "sudo cat /home/docker/cp-test_ha-048384-m03_ha-048384.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp ha-048384-m03:/home/docker/cp-test.txt ha-048384-m02:/home/docker/cp-test_ha-048384-m03_ha-048384-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m02 "sudo cat /home/docker/cp-test_ha-048384-m03_ha-048384-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp ha-048384-m03:/home/docker/cp-test.txt ha-048384-m04:/home/docker/cp-test_ha-048384-m03_ha-048384-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m04 "sudo cat /home/docker/cp-test_ha-048384-m03_ha-048384-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp testdata/cp-test.txt ha-048384-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp ha-048384-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2786919824/001/cp-test_ha-048384-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp ha-048384-m04:/home/docker/cp-test.txt ha-048384:/home/docker/cp-test_ha-048384-m04_ha-048384.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384 "sudo cat /home/docker/cp-test_ha-048384-m04_ha-048384.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp ha-048384-m04:/home/docker/cp-test.txt ha-048384-m02:/home/docker/cp-test_ha-048384-m04_ha-048384-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m02 "sudo cat /home/docker/cp-test_ha-048384-m04_ha-048384-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 cp ha-048384-m04:/home/docker/cp-test.txt ha-048384-m03:/home/docker/cp-test_ha-048384-m04_ha-048384-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 ssh -n ha-048384-m03 "sudo cat /home/docker/cp-test_ha-048384-m04_ha-048384-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.27s)
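CopyFile above runs a full source-to-destination matrix: host to each node, each node back to the host, and every node to every other node, verifying each hop with `ssh ... sudo cat`. A sketch that generates the same pair matrix; the /tmp scratch path is a placeholder, not the test's real temp dir:

package main

import "fmt"

func main() {
	// Node names from the ha-048384 cluster above.
	nodes := []string{"ha-048384", "ha-048384-m02", "ha-048384-m03", "ha-048384-m04"}
	for _, src := range nodes {
		// Host -> node, then node -> host.
		fmt.Printf("cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", src)
		fmt.Printf("cp %s:/home/docker/cp-test.txt /tmp/copyfile-scratch/cp-test_%s.txt\n", src, src)
		// Node -> every other node.
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			fmt.Printf("cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
				src, dst, src, dst)
		}
	}
}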

TestMultiControlPlane/serial/StopSecondaryNode (13.01s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 node stop m02 --alsologtostderr -v 5
E1027 22:41:55.045740 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-048384 node stop m02 --alsologtostderr -v 5: (12.047274006s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-048384 status --alsologtostderr -v 5: exit status 7 (966.880074ms)

                                                
                                                
-- stdout --
	ha-048384
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-048384-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-048384-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-048384-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:41:59.294280 1178171 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:41:59.294509 1178171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:41:59.294541 1178171 out.go:374] Setting ErrFile to fd 2...
	I1027 22:41:59.294562 1178171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:41:59.294839 1178171 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:41:59.295077 1178171 out.go:368] Setting JSON to false
	I1027 22:41:59.295166 1178171 mustload.go:66] Loading cluster: ha-048384
	I1027 22:41:59.295213 1178171 notify.go:221] Checking for updates...
	I1027 22:41:59.295662 1178171 config.go:182] Loaded profile config "ha-048384": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:41:59.295704 1178171 status.go:174] checking status of ha-048384 ...
	I1027 22:41:59.296638 1178171 cli_runner.go:164] Run: docker container inspect ha-048384 --format={{.State.Status}}
	I1027 22:41:59.317638 1178171 status.go:371] ha-048384 host status = "Running" (err=<nil>)
	I1027 22:41:59.317666 1178171 host.go:66] Checking if "ha-048384" exists ...
	I1027 22:41:59.317971 1178171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-048384
	I1027 22:41:59.348091 1178171 host.go:66] Checking if "ha-048384" exists ...
	I1027 22:41:59.348383 1178171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:41:59.348422 1178171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-048384
	I1027 22:41:59.373844 1178171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34259 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/ha-048384/id_rsa Username:docker}
	I1027 22:41:59.476132 1178171 ssh_runner.go:195] Run: systemctl --version
	I1027 22:41:59.484065 1178171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:41:59.497290 1178171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:41:59.583045 1178171 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-27 22:41:59.56773422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:41:59.583612 1178171 kubeconfig.go:125] found "ha-048384" server: "https://192.168.49.254:8443"
	I1027 22:41:59.583648 1178171 api_server.go:166] Checking apiserver status ...
	I1027 22:41:59.583696 1178171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:41:59.598477 1178171 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1251/cgroup
	I1027 22:41:59.607208 1178171 api_server.go:182] apiserver freezer: "2:freezer:/docker/6c6309f44d8f00221daf658859c5b71fc7cf15c5857ba50915aeb4ead8cbd8e7/crio/crio-9b8463c8be5230d200efb3a50d869accb7eb662f7ee3492bc96ee9d7956f9baf"
	I1027 22:41:59.607278 1178171 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6c6309f44d8f00221daf658859c5b71fc7cf15c5857ba50915aeb4ead8cbd8e7/crio/crio-9b8463c8be5230d200efb3a50d869accb7eb662f7ee3492bc96ee9d7956f9baf/freezer.state
	I1027 22:41:59.615760 1178171 api_server.go:204] freezer state: "THAWED"
	I1027 22:41:59.615790 1178171 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1027 22:41:59.624213 1178171 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1027 22:41:59.624244 1178171 status.go:463] ha-048384 apiserver status = Running (err=<nil>)
	I1027 22:41:59.624278 1178171 status.go:176] ha-048384 status: &{Name:ha-048384 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:41:59.624302 1178171 status.go:174] checking status of ha-048384-m02 ...
	I1027 22:41:59.624624 1178171 cli_runner.go:164] Run: docker container inspect ha-048384-m02 --format={{.State.Status}}
	I1027 22:41:59.654549 1178171 status.go:371] ha-048384-m02 host status = "Stopped" (err=<nil>)
	I1027 22:41:59.654575 1178171 status.go:384] host is not running, skipping remaining checks
	I1027 22:41:59.654582 1178171 status.go:176] ha-048384-m02 status: &{Name:ha-048384-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:41:59.654603 1178171 status.go:174] checking status of ha-048384-m03 ...
	I1027 22:41:59.654920 1178171 cli_runner.go:164] Run: docker container inspect ha-048384-m03 --format={{.State.Status}}
	I1027 22:41:59.672581 1178171 status.go:371] ha-048384-m03 host status = "Running" (err=<nil>)
	I1027 22:41:59.672606 1178171 host.go:66] Checking if "ha-048384-m03" exists ...
	I1027 22:41:59.672899 1178171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-048384-m03
	I1027 22:41:59.690335 1178171 host.go:66] Checking if "ha-048384-m03" exists ...
	I1027 22:41:59.690783 1178171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:41:59.690835 1178171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-048384-m03
	I1027 22:41:59.707415 1178171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34269 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/ha-048384-m03/id_rsa Username:docker}
	I1027 22:41:59.812087 1178171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:41:59.826491 1178171 kubeconfig.go:125] found "ha-048384" server: "https://192.168.49.254:8443"
	I1027 22:41:59.826524 1178171 api_server.go:166] Checking apiserver status ...
	I1027 22:41:59.826625 1178171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:41:59.838932 1178171 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	I1027 22:41:59.847403 1178171 api_server.go:182] apiserver freezer: "2:freezer:/docker/f2ba3de8303ef2c192f13459136e8f1acadbb3d03e3fe053f111489c88aaa6f4/crio/crio-bf2952121e36cff487fe2f7996ad6af2bf8bb3612b35e86a85cb224d2983339a"
	I1027 22:41:59.847497 1178171 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f2ba3de8303ef2c192f13459136e8f1acadbb3d03e3fe053f111489c88aaa6f4/crio/crio-bf2952121e36cff487fe2f7996ad6af2bf8bb3612b35e86a85cb224d2983339a/freezer.state
	I1027 22:41:59.855732 1178171 api_server.go:204] freezer state: "THAWED"
	I1027 22:41:59.855812 1178171 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1027 22:41:59.867290 1178171 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1027 22:41:59.867319 1178171 status.go:463] ha-048384-m03 apiserver status = Running (err=<nil>)
	I1027 22:41:59.867330 1178171 status.go:176] ha-048384-m03 status: &{Name:ha-048384-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:41:59.867353 1178171 status.go:174] checking status of ha-048384-m04 ...
	I1027 22:41:59.867704 1178171 cli_runner.go:164] Run: docker container inspect ha-048384-m04 --format={{.State.Status}}
	I1027 22:41:59.885544 1178171 status.go:371] ha-048384-m04 host status = "Running" (err=<nil>)
	I1027 22:41:59.885571 1178171 host.go:66] Checking if "ha-048384-m04" exists ...
	I1027 22:41:59.885862 1178171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-048384-m04
	I1027 22:41:59.902864 1178171 host.go:66] Checking if "ha-048384-m04" exists ...
	I1027 22:41:59.903172 1178171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:41:59.903215 1178171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-048384-m04
	I1027 22:41:59.927219 1178171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34274 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/ha-048384-m04/id_rsa Username:docker}
	I1027 22:42:00.129187 1178171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:42:00.191625 1178171 status.go:176] ha-048384-m04 status: &{Name:ha-048384-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.01s)
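The status flow logged above has four steps per node: inspect the container state, find the kube-apiserver PID with pgrep, read its freezer cgroup state, and finally probe /healthz through the shared control-plane endpoint. A minimal Go sketch of that last probe, assuming the endpoint from this run (https://192.168.49.254:8443) and skipping TLS verification purely for illustration (a real client would trust the cluster CA):

// health_probe.go: hedged sketch of the /healthz check `minikube status`
// performs once it has confirmed the container and kubelet are running.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only; trust the cluster CA in real code
		},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver: Stopped or unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode) // 200 maps to "apiserver: Running" above
}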

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.97s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.97s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (23.43s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-048384 node start m02 --alsologtostderr -v 5: (22.082622467s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-048384 status --alsologtostderr -v 5: (1.227289747s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (23.43s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.16s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.157457177s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.16s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (136.5s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 stop --alsologtostderr -v 5
E1027 22:42:36.008240 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-048384 stop --alsologtostderr -v 5: (26.500521384s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 start --wait true --alsologtostderr -v 5
E1027 22:43:57.929765 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:44:18.104521 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-048384 start --wait true --alsologtostderr -v 5: (1m49.817720661s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (136.50s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.83s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-048384 node delete m03 --alsologtostderr -v 5: (10.851732038s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.83s)
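The go-template passed to kubectl above is how the test asserts every remaining node reports Ready without depending on column layout. A small sketch of what kubectl evaluates: the same template rendered with Go's text/template against a List object decoded from JSON (the two-node JSON below is made up for illustration):

// ready_template.go: hedged sketch of the node-Ready template check.
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

const nodeList = `{"items":[
  {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
  {"status":{"conditions":[{"type":"MemoryPressure","status":"False"},
                           {"type":"Ready","status":"True"}]}}
]}`

func main() {
	tmpl := template.Must(template.New("ready").Parse(
		`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	var list map[string]interface{}
	if err := json.Unmarshal([]byte(nodeList), &list); err != nil {
		panic(err)
	}
	// Prints one " True" line per Ready node, which is what the test greps for.
	if err := tmpl.Execute(os.Stdout, list); err != nil {
		panic(err)
	}
}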

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.01s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-048384 stop --alsologtostderr -v 5: (35.883156804s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-048384 status --alsologtostderr -v 5: exit status 7 (124.927656ms)

                                                
                                                
-- stdout --
	ha-048384
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-048384-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-048384-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 22:45:30.835409 1189898 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:45:30.835598 1189898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:45:30.835630 1189898 out.go:374] Setting ErrFile to fd 2...
	I1027 22:45:30.835653 1189898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:45:30.835920 1189898 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:45:30.836152 1189898 out.go:368] Setting JSON to false
	I1027 22:45:30.836222 1189898 mustload.go:66] Loading cluster: ha-048384
	I1027 22:45:30.836284 1189898 notify.go:221] Checking for updates...
	I1027 22:45:30.836688 1189898 config.go:182] Loaded profile config "ha-048384": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:45:30.836730 1189898 status.go:174] checking status of ha-048384 ...
	I1027 22:45:30.837276 1189898 cli_runner.go:164] Run: docker container inspect ha-048384 --format={{.State.Status}}
	I1027 22:45:30.856593 1189898 status.go:371] ha-048384 host status = "Stopped" (err=<nil>)
	I1027 22:45:30.856617 1189898 status.go:384] host is not running, skipping remaining checks
	I1027 22:45:30.856623 1189898 status.go:176] ha-048384 status: &{Name:ha-048384 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:45:30.856652 1189898 status.go:174] checking status of ha-048384-m02 ...
	I1027 22:45:30.856958 1189898 cli_runner.go:164] Run: docker container inspect ha-048384-m02 --format={{.State.Status}}
	I1027 22:45:30.889105 1189898 status.go:371] ha-048384-m02 host status = "Stopped" (err=<nil>)
	I1027 22:45:30.889126 1189898 status.go:384] host is not running, skipping remaining checks
	I1027 22:45:30.889133 1189898 status.go:176] ha-048384-m02 status: &{Name:ha-048384-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:45:30.889156 1189898 status.go:174] checking status of ha-048384-m04 ...
	I1027 22:45:30.889471 1189898 cli_runner.go:164] Run: docker container inspect ha-048384-m04 --format={{.State.Status}}
	I1027 22:45:30.910340 1189898 status.go:371] ha-048384-m04 host status = "Stopped" (err=<nil>)
	I1027 22:45:30.910368 1189898 status.go:384] host is not running, skipping remaining checks
	I1027 22:45:30.910405 1189898 status.go:176] ha-048384-m04 status: &{Name:ha-048384-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.01s)
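Note that `status` deliberately exits 7 rather than 0 here, as it did after StopSecondaryNode; the test treats the exit code as a machine-readable summary of node state, so callers must inspect the code instead of treating any error as a failure. A sketch of reading that code from Go, assuming the same binary path and profile name as this run:

// status_exit.go: hedged sketch of checking `minikube status` by exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "ha-048384", "status").CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// 7 is what this run reports while all hosts are stopped.
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
}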

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (75.71s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1027 22:46:14.068811 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 22:46:41.772243 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-048384 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m14.733352578s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (75.71s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (82.17s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-048384 node add --control-plane --alsologtostderr -v 5: (1m21.091210204s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-048384 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-048384 status --alsologtostderr -v 5: (1.079249751s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.17s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.1s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.098700001s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.10s)

                                                
                                    
TestJSONOutput/start/Command (79.89s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-635165 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1027 22:49:18.106509 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-635165 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m19.886988821s)
--- PASS: TestJSONOutput/start/Command (79.89s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.83s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-635165 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-635165 --output=json --user=testUser: (5.830737759s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-160689 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-160689 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.523554ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2fd6db73-e661-4e0f-8e5b-55f6b5929ad5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-160689] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8a62996c-b686-4273-aa36-4ef1252fda4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21790"}}
	{"specversion":"1.0","id":"62844436-4867-47a1-a895-e5123aa2cf32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a7e58855-3b5c-48d9-af6c-3a0821cccc1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig"}}
	{"specversion":"1.0","id":"1fd32514-b84a-44cd-9411-b32266a906da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube"}}
	{"specversion":"1.0","id":"da43cfd6-3dd1-4375-b630-1bd246d9ae7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a79573f0-a952-44ff-82ec-35c4a0ec2bb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"600843c1-7169-4dda-bffd-b0ed762ef054","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-160689" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-160689
--- PASS: TestErrorJSONOutput (0.24s)
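Each stdout line above is a CloudEvents envelope, which is what the JSON-output tests parse. A sketch of decoding such a stream, modelling only the fields visible in this log (the struct is illustrative, not the full minikube schema):

// events_decode.go: hedged sketch of consuming minikube's --output=json events.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json` in
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// TestErrorJSONOutput asserts on exactly this event shape.
			fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}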

                                                
                                    
TestKicCustomNetwork/create_custom_network (46.07s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-980767 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-980767 --network=: (43.81840224s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-980767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-980767
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-980767: (2.224262021s)
--- PASS: TestKicCustomNetwork/create_custom_network (46.07s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (38.72s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-266493 --network=bridge
E1027 22:51:14.070535 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-266493 --network=bridge: (36.625177058s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-266493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-266493
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-266493: (2.072413084s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.72s)

                                                
                                    
TestKicExistingNetwork (38.44s)
=== RUN   TestKicExistingNetwork
I1027 22:51:19.822584 1134735 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1027 22:51:19.842245 1134735 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1027 22:51:19.843187 1134735 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1027 22:51:19.843228 1134735 cli_runner.go:164] Run: docker network inspect existing-network
W1027 22:51:19.859557 1134735 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1027 22:51:19.859587 1134735 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1027 22:51:19.859607 1134735 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1027 22:51:19.859723 1134735 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1027 22:51:19.877961 1134735 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bec5bade6d32 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b2:b8:32:37:d1:1a} reservation:<nil>}
I1027 22:51:19.878317 1134735 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001871320}
I1027 22:51:19.878340 1134735 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1027 22:51:19.878423 1134735 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1027 22:51:19.935920 1134735 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-953627 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-953627 --network=existing-network: (36.237295336s)
helpers_test.go:175: Cleaning up "existing-network-953627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-953627
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-953627: (2.052938924s)
I1027 22:51:58.243651 1134735 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (38.44s)
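The interesting part of this test is the pre-created network: minikube found 192.168.49.0/24 taken, picked 192.168.58.0/24, and created the network with the labels that later let it adopt rather than recreate it. A sketch reproducing that invocation from Go via os/exec, with the flags copied from the `docker network create` line in the log:

// network_create.go: hedged sketch of the network pre-creation step.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("create failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("network created: %s", out) // docker prints the new network ID
}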

                                                
                                    
TestKicCustomSubnet (35.83s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-879777 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-879777 --subnet=192.168.60.0/24: (33.595582143s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-879777 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-879777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-879777
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-879777: (2.204825757s)
--- PASS: TestKicCustomSubnet (35.83s)

                                                
                                    
TestKicStaticIP (40.18s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-504068 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-504068 --static-ip=192.168.200.200: (37.79285019s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-504068 ip
helpers_test.go:175: Cleaning up "static-ip-504068" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-504068
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-504068: (2.236046509s)
--- PASS: TestKicStaticIP (40.18s)

                                                
                                    
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (73.45s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-405255 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-405255 --driver=docker  --container-runtime=crio: (33.670073019s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-408529 --driver=docker  --container-runtime=crio
E1027 22:54:18.106793 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-408529 --driver=docker  --container-runtime=crio: (34.08491026s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-405255
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-408529
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-408529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-408529
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-408529: (2.156663168s)
helpers_test.go:175: Cleaning up "first-405255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-405255
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-405255: (2.078041222s)
--- PASS: TestMinikubeProfile (73.45s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.12s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-536752 --memory=3072 --mount-string /tmp/TestMountStartserial2547717666/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-536752 --memory=3072 --mount-string /tmp/TestMountStartserial2547717666/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.120092927s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.12s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-536752 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.23s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-538573 --memory=3072 --mount-string /tmp/TestMountStartserial2547717666/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-538573 --memory=3072 --mount-string /tmp/TestMountStartserial2547717666/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.227406018s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.23s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-538573 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.88s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-536752 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-536752 --alsologtostderr -v=5: (1.87942921s)
--- PASS: TestMountStart/serial/DeleteFirst (1.88s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-538573 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-538573
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-538573: (1.286610922s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.59s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-538573
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-538573: (6.590339548s)
--- PASS: TestMountStart/serial/RestartStopped (7.59s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-538573 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)
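These MountStart checks all reduce to one round trip: content under the host side of --mount-string must be listable at /minikube-host in the guest, including after stop/start. A sketch of that round trip, assuming a profile started with --mount-string /tmp/mount-demo:/minikube-host (the host directory is a placeholder; the profile name matches this run):

// mount_roundtrip.go: hedged sketch of the host-to-guest mount verification.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	hostDir := "/tmp/mount-demo" // must match the host half of --mount-string
	if err := os.MkdirAll(hostDir, 0o755); err != nil {
		panic(err)
	}
	marker := filepath.Join(hostDir, "marker.txt")
	if err := os.WriteFile(marker, []byte("hello from host\n"), 0o644); err != nil {
		panic(err)
	}
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "mount-start-2-538573",
		"ssh", "--", "ls", "/minikube-host").CombinedOutput()
	if err != nil {
		fmt.Printf("ssh failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("guest sees:\n%s", out) // marker.txt should appear here
}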

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (136.81s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-074691 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1027 22:56:14.068159 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-074691 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m16.272420607s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (136.81s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.23s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-074691 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-074691 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-074691 -- rollout status deployment/busybox: (4.475267599s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-074691 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-074691 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-074691 -- exec busybox-7b57f96db7-4kwvk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-074691 -- exec busybox-7b57f96db7-8xshv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-074691 -- exec busybox-7b57f96db7-4kwvk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-074691 -- exec busybox-7b57f96db7-8xshv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-074691 -- exec busybox-7b57f96db7-4kwvk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-074691 -- exec busybox-7b57f96db7-8xshv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.23s)
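The DNS assertions above run the same three lookups in each busybox replica, so resolution is exercised from pods on both nodes. A sketch of that loop, using the pod names from this run (a fresh cluster would list pods first with the jsonpath query shown above):

// dns_check.go: hedged sketch of the per-pod DNS verification.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7b57f96db7-4kwvk", "busybox-7b57f96db7-8xshv"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, host := range names {
			out, err := exec.Command("kubectl", "--context", "multinode-074691",
				"exec", pod, "--", "nslookup", host).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: lookup %s failed: %v\n%s", pod, host, err, out)
				continue
			}
			fmt.Printf("%s: %s resolves\n", pod, host)
		}
	}
}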

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.91s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-074691 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-074691 -- exec busybox-7b57f96db7-4kwvk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E1027 22:57:21.169811 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-074691 -- exec busybox-7b57f96db7-4kwvk -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-074691 -- exec busybox-7b57f96db7-8xshv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-074691 -- exec busybox-7b57f96db7-8xshv -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

TestMultiNode/serial/AddNode (58.67s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-074691 -v=5 --alsologtostderr
E1027 22:57:37.134012 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-074691 -v=5 --alsologtostderr: (57.971904186s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.67s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-074691 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.72s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.48s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 cp testdata/cp-test.txt multinode-074691:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 ssh -n multinode-074691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 cp multinode-074691:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1222761100/001/cp-test_multinode-074691.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 ssh -n multinode-074691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 cp multinode-074691:/home/docker/cp-test.txt multinode-074691-m02:/home/docker/cp-test_multinode-074691_multinode-074691-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 ssh -n multinode-074691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 ssh -n multinode-074691-m02 "sudo cat /home/docker/cp-test_multinode-074691_multinode-074691-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 cp multinode-074691:/home/docker/cp-test.txt multinode-074691-m03:/home/docker/cp-test_multinode-074691_multinode-074691-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 ssh -n multinode-074691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 ssh -n multinode-074691-m03 "sudo cat /home/docker/cp-test_multinode-074691_multinode-074691-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 cp testdata/cp-test.txt multinode-074691-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 ssh -n multinode-074691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 cp multinode-074691-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1222761100/001/cp-test_multinode-074691-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 ssh -n multinode-074691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 cp multinode-074691-m02:/home/docker/cp-test.txt multinode-074691:/home/docker/cp-test_multinode-074691-m02_multinode-074691.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 ssh -n multinode-074691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 ssh -n multinode-074691 "sudo cat /home/docker/cp-test_multinode-074691-m02_multinode-074691.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 cp multinode-074691-m02:/home/docker/cp-test.txt multinode-074691-m03:/home/docker/cp-test_multinode-074691-m02_multinode-074691-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 ssh -n multinode-074691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 ssh -n multinode-074691-m03 "sudo cat /home/docker/cp-test_multinode-074691-m02_multinode-074691-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 cp testdata/cp-test.txt multinode-074691-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 ssh -n multinode-074691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 cp multinode-074691-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1222761100/001/cp-test_multinode-074691-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 ssh -n multinode-074691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 cp multinode-074691-m03:/home/docker/cp-test.txt multinode-074691:/home/docker/cp-test_multinode-074691-m03_multinode-074691.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 ssh -n multinode-074691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 ssh -n multinode-074691 "sudo cat /home/docker/cp-test_multinode-074691-m03_multinode-074691.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 cp multinode-074691-m03:/home/docker/cp-test.txt multinode-074691-m02:/home/docker/cp-test_multinode-074691-m03_multinode-074691-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 ssh -n multinode-074691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 ssh -n multinode-074691-m02 "sudo cat /home/docker/cp-test_multinode-074691-m03_multinode-074691-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.48s)

TestMultiNode/serial/StopNode (2.47s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-074691 node stop m03: (1.319314114s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-074691 status: exit status 7 (603.685641ms)

-- stdout --
	multinode-074691
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-074691-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-074691-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-074691 status --alsologtostderr: exit status 7 (543.946969ms)

-- stdout --
	multinode-074691
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-074691-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-074691-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1027 22:58:33.724251 1240302 out.go:360] Setting OutFile to fd 1 ...
	I1027 22:58:33.724442 1240302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:58:33.724473 1240302 out.go:374] Setting ErrFile to fd 2...
	I1027 22:58:33.724494 1240302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 22:58:33.724779 1240302 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 22:58:33.725001 1240302 out.go:368] Setting JSON to false
	I1027 22:58:33.725068 1240302 mustload.go:66] Loading cluster: multinode-074691
	I1027 22:58:33.725130 1240302 notify.go:221] Checking for updates...
	I1027 22:58:33.726082 1240302 config.go:182] Loaded profile config "multinode-074691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 22:58:33.726125 1240302 status.go:174] checking status of multinode-074691 ...
	I1027 22:58:33.726708 1240302 cli_runner.go:164] Run: docker container inspect multinode-074691 --format={{.State.Status}}
	I1027 22:58:33.747278 1240302 status.go:371] multinode-074691 host status = "Running" (err=<nil>)
	I1027 22:58:33.747306 1240302 host.go:66] Checking if "multinode-074691" exists ...
	I1027 22:58:33.747608 1240302 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-074691
	I1027 22:58:33.773423 1240302 host.go:66] Checking if "multinode-074691" exists ...
	I1027 22:58:33.773718 1240302 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:58:33.773766 1240302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-074691
	I1027 22:58:33.791500 1240302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34379 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/multinode-074691/id_rsa Username:docker}
	I1027 22:58:33.895723 1240302 ssh_runner.go:195] Run: systemctl --version
	I1027 22:58:33.902074 1240302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:58:33.914744 1240302 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 22:58:33.970883 1240302 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-27 22:58:33.960513247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 22:58:33.971424 1240302 kubeconfig.go:125] found "multinode-074691" server: "https://192.168.67.2:8443"
	I1027 22:58:33.971460 1240302 api_server.go:166] Checking apiserver status ...
	I1027 22:58:33.971512 1240302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 22:58:33.985044 1240302 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1238/cgroup
	I1027 22:58:33.994367 1240302 api_server.go:182] apiserver freezer: "2:freezer:/docker/f7093ecb807fd171d33393ee735a1b8fcd1098f7838d472be1cae9d0c4f405f2/crio/crio-ad6d480574e782bc2032d29c20f53305ca535d1f834c04b6b85dceeea2873074"
	I1027 22:58:33.994474 1240302 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f7093ecb807fd171d33393ee735a1b8fcd1098f7838d472be1cae9d0c4f405f2/crio/crio-ad6d480574e782bc2032d29c20f53305ca535d1f834c04b6b85dceeea2873074/freezer.state
	I1027 22:58:34.003184 1240302 api_server.go:204] freezer state: "THAWED"
	I1027 22:58:34.003215 1240302 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1027 22:58:34.015603 1240302 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1027 22:58:34.015640 1240302 status.go:463] multinode-074691 apiserver status = Running (err=<nil>)
	I1027 22:58:34.015659 1240302 status.go:176] multinode-074691 status: &{Name:multinode-074691 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:58:34.015686 1240302 status.go:174] checking status of multinode-074691-m02 ...
	I1027 22:58:34.016088 1240302 cli_runner.go:164] Run: docker container inspect multinode-074691-m02 --format={{.State.Status}}
	I1027 22:58:34.034341 1240302 status.go:371] multinode-074691-m02 host status = "Running" (err=<nil>)
	I1027 22:58:34.034364 1240302 host.go:66] Checking if "multinode-074691-m02" exists ...
	I1027 22:58:34.034743 1240302 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-074691-m02
	I1027 22:58:34.052461 1240302 host.go:66] Checking if "multinode-074691-m02" exists ...
	I1027 22:58:34.052788 1240302 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 22:58:34.052834 1240302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-074691-m02
	I1027 22:58:34.071341 1240302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34384 SSHKeyPath:/home/jenkins/minikube-integration/21790-1132878/.minikube/machines/multinode-074691-m02/id_rsa Username:docker}
	I1027 22:58:34.180126 1240302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 22:58:34.193580 1240302 status.go:176] multinode-074691-m02 status: &{Name:multinode-074691-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1027 22:58:34.193615 1240302 status.go:174] checking status of multinode-074691-m03 ...
	I1027 22:58:34.193959 1240302 cli_runner.go:164] Run: docker container inspect multinode-074691-m03 --format={{.State.Status}}
	I1027 22:58:34.212414 1240302 status.go:371] multinode-074691-m03 host status = "Stopped" (err=<nil>)
	I1027 22:58:34.212445 1240302 status.go:384] host is not running, skipping remaining checks
	I1027 22:58:34.212453 1240302 status.go:176] multinode-074691-m03 status: &{Name:multinode-074691-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.47s)
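Aside: the stderr trace above shows how `status` concludes the apiserver is Running: find the kube-apiserver PID with pgrep, read its freezer cgroup and require state THAWED, then GET /healthz on https://192.168.67.2:8443 and expect HTTP 200 with body "ok". A minimal sketch of the final probe, assuming the endpoint from the trace; the InsecureSkipVerify shortcut keeps the sketch self-contained and is not how minikube itself verifies the cluster CA:

    // Sketch: probe the apiserver health endpoint, as in the trace above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func apiserverHealthy(endpoint string) (bool, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption: skip CA verification to keep the sketch self-contained;
            // the real check trusts the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(endpoint + "/healthz")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK, nil // the log shows "returned 200: ok"
    }

    func main() {
        ok, err := apiserverHealthy("https://192.168.67.2:8443") // endpoint from the trace
        fmt.Println(ok, err)
    }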

TestMultiNode/serial/StartAfterStop (8.14s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-074691 node start m03 -v=5 --alsologtostderr: (7.335416895s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.14s)

TestMultiNode/serial/RestartKeepsNodes (79.61s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-074691
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-074691
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-074691: (25.243272734s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-074691 --wait=true -v=5 --alsologtostderr
E1027 22:59:18.104829 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-074691 --wait=true -v=5 --alsologtostderr: (54.16258686s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-074691
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.61s)

TestMultiNode/serial/DeleteNode (6.02s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-074691 node delete m03: (5.341348728s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.02s)
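Aside: the go-template passed to kubectl above iterates every node's status.conditions and prints the status of the Ready condition. The sketch below runs the same template logic locally with Go's text/template; the reduced structs are stand-ins for real Node objects, and the field names are capitalized because templates over Go structs need exported fields (kubectl evaluates the lowercase JSON keys seen in the log):

    // Sketch: the Ready-condition template from the kubectl invocation above,
    // evaluated locally over reduced stand-in structs.
    package main

    import (
        "os"
        "text/template"
    )

    type condition struct{ Type, Status string }
    type node struct {
        Status struct{ Conditions []condition }
    }
    type nodeList struct{ Items []node }

    func main() {
        tmpl := template.Must(template.New("ready").Parse(
            `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`))
        var list nodeList
        var n node
        n.Status.Conditions = []condition{
            {Type: "MemoryPressure", Status: "False"},
            {Type: "Ready", Status: "True"},
        }
        list.Items = []node{n, n} // two remaining nodes, as after the delete above
        _ = tmpl.Execute(os.Stdout, &list) // prints " True" once per node
    }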

TestMultiNode/serial/StopMultiNode (24.07s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-074691 stop: (23.882111174s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-074691 status: exit status 7 (88.871156ms)

-- stdout --
	multinode-074691
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-074691-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-074691 status --alsologtostderr: exit status 7 (102.886767ms)

-- stdout --
	multinode-074691
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-074691-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1027 23:00:32.018345 1248075 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:00:32.018581 1248075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:00:32.018611 1248075 out.go:374] Setting ErrFile to fd 2...
	I1027 23:00:32.018630 1248075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:00:32.018943 1248075 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:00:32.019200 1248075 out.go:368] Setting JSON to false
	I1027 23:00:32.019331 1248075 mustload.go:66] Loading cluster: multinode-074691
	I1027 23:00:32.019407 1248075 notify.go:221] Checking for updates...
	I1027 23:00:32.020651 1248075 config.go:182] Loaded profile config "multinode-074691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:00:32.020685 1248075 status.go:174] checking status of multinode-074691 ...
	I1027 23:00:32.021428 1248075 cli_runner.go:164] Run: docker container inspect multinode-074691 --format={{.State.Status}}
	I1027 23:00:32.040326 1248075 status.go:371] multinode-074691 host status = "Stopped" (err=<nil>)
	I1027 23:00:32.040352 1248075 status.go:384] host is not running, skipping remaining checks
	I1027 23:00:32.040359 1248075 status.go:176] multinode-074691 status: &{Name:multinode-074691 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 23:00:32.040384 1248075 status.go:174] checking status of multinode-074691-m02 ...
	I1027 23:00:32.040717 1248075 cli_runner.go:164] Run: docker container inspect multinode-074691-m02 --format={{.State.Status}}
	I1027 23:00:32.064070 1248075 status.go:371] multinode-074691-m02 host status = "Stopped" (err=<nil>)
	I1027 23:00:32.064092 1248075 status.go:384] host is not running, skipping remaining checks
	I1027 23:00:32.064106 1248075 status.go:176] multinode-074691-m02 status: &{Name:multinode-074691-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.07s)

TestMultiNode/serial/RestartMultiNode (48.96s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-074691 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1027 23:01:14.069155 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-074691 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (48.255973606s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-074691 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.96s)

TestMultiNode/serial/ValidateNameConflict (39.9s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-074691
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-074691-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-074691-m02 --driver=docker  --container-runtime=crio: exit status 14 (96.283379ms)

-- stdout --
	* [multinode-074691-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-074691-m02' is duplicated with machine name 'multinode-074691-m02' in profile 'multinode-074691'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-074691-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-074691-m03 --driver=docker  --container-runtime=crio: (37.21849859s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-074691
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-074691: exit status 80 (349.491253ms)

-- stdout --
	* Adding node m03 to cluster multinode-074691 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-074691-m03 already exists in multinode-074691-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-074691-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-074691-m03: (2.169132436s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.90s)

TestPreload (133.92s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-307717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-307717 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m3.605514029s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-307717 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-307717 image pull gcr.io/k8s-minikube/busybox: (2.247732593s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-307717
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-307717: (5.922230377s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-307717 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-307717 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (59.41578104s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-307717 image list
helpers_test.go:175: Cleaning up "test-preload-307717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-307717
E1027 23:04:18.103821 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-307717: (2.48572682s)
--- PASS: TestPreload (133.92s)

TestScheduledStopUnix (109.46s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-251244 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-251244 --memory=3072 --driver=docker  --container-runtime=crio: (32.718490228s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-251244 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-251244 -n scheduled-stop-251244
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-251244 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1027 23:04:52.466155 1134735 retry.go:31] will retry after 94.29µs: open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/scheduled-stop-251244/pid: no such file or directory
I1027 23:04:52.467292 1134735 retry.go:31] will retry after 163.42µs: open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/scheduled-stop-251244/pid: no such file or directory
I1027 23:04:52.468386 1134735 retry.go:31] will retry after 271.723µs: open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/scheduled-stop-251244/pid: no such file or directory
I1027 23:04:52.469462 1134735 retry.go:31] will retry after 314.428µs: open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/scheduled-stop-251244/pid: no such file or directory
I1027 23:04:52.470579 1134735 retry.go:31] will retry after 445.847µs: open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/scheduled-stop-251244/pid: no such file or directory
I1027 23:04:52.471727 1134735 retry.go:31] will retry after 458.179µs: open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/scheduled-stop-251244/pid: no such file or directory
I1027 23:04:52.472794 1134735 retry.go:31] will retry after 782.364µs: open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/scheduled-stop-251244/pid: no such file or directory
I1027 23:04:52.473864 1134735 retry.go:31] will retry after 1.811506ms: open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/scheduled-stop-251244/pid: no such file or directory
I1027 23:04:52.476041 1134735 retry.go:31] will retry after 3.05582ms: open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/scheduled-stop-251244/pid: no such file or directory
I1027 23:04:52.479169 1134735 retry.go:31] will retry after 2.831798ms: open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/scheduled-stop-251244/pid: no such file or directory
I1027 23:04:52.482356 1134735 retry.go:31] will retry after 8.6127ms: open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/scheduled-stop-251244/pid: no such file or directory
I1027 23:04:52.491601 1134735 retry.go:31] will retry after 7.45072ms: open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/scheduled-stop-251244/pid: no such file or directory
I1027 23:04:52.501771 1134735 retry.go:31] will retry after 14.829887ms: open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/scheduled-stop-251244/pid: no such file or directory
I1027 23:04:52.517289 1134735 retry.go:31] will retry after 21.685062ms: open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/scheduled-stop-251244/pid: no such file or directory
I1027 23:04:52.539864 1134735 retry.go:31] will retry after 37.892554ms: open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/scheduled-stop-251244/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-251244 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-251244 -n scheduled-stop-251244
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-251244
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-251244 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-251244
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-251244: exit status 7 (73.165712ms)

-- stdout --
	scheduled-stop-251244
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-251244 -n scheduled-stop-251244
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-251244 -n scheduled-stop-251244: exit status 7 (75.89131ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-251244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-251244
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-251244: (5.129276463s)
--- PASS: TestScheduledStopUnix (109.46s)
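Aside: the "will retry after ..." lines above show the poll on the scheduled-stop pid file backing off roughly geometrically with jitter until the file appears. A minimal sketch of that retry pattern; the initial delay, growth factor, and deadline are assumptions for illustration, not minikube's actual constants:

    // Sketch: retry with growing, jittered backoff while waiting for a pid file.
    package main

    import (
        "fmt"
        "math/rand"
        "os"
        "time"
    )

    func waitForFile(path string, deadline time.Duration) error {
        delay := 100 * time.Microsecond // assumption: starting delay
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(delay)
            // Roughly geometric growth plus jitter, like the intervals in the log.
            delay = delay*2 + time.Duration(rand.Int63n(int64(delay)))
        }
        return fmt.Errorf("timed out after %s waiting for %s", deadline, path)
    }

    func main() {
        fmt.Println(waitForFile("/tmp/example/pid", 50*time.Millisecond))
    }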

TestInsufficientStorage (13.61s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-503365 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E1027 23:06:14.070524 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-503365 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.039430832s)

-- stdout --
	{"specversion":"1.0","id":"8fdcb562-8433-4a1a-8735-fb491c887456","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-503365] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"47d577bc-4b9d-4767-a6ef-7e98c741c9a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21790"}}
	{"specversion":"1.0","id":"177f5c32-1974-4d86-b390-e6fd872ce198","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4cd48008-fb49-4342-b5ea-4185f0cfc0c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig"}}
	{"specversion":"1.0","id":"86ce11cc-d7bb-4404-b48b-e2fd537fecd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube"}}
	{"specversion":"1.0","id":"681de3d3-b0ba-40ff-9865-64e0e92d20bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"0898f9ce-72d0-433d-a6f0-5a9f267d40cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9c4ba1ed-e6dd-4da4-b787-168006a7e4c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7339c3d8-b47d-40d3-b924-a3178bf12b44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c5d88e88-17ea-428e-9e63-b4aa5e15ff4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e01f7984-21cf-4d56-8f6b-75307aecd3fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2a826d84-e8f5-457e-9656-5face6708973","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-503365\" primary control-plane node in \"insufficient-storage-503365\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b3249641-8b49-496c-b44a-521dab1dd166","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"aad99d71-e2e3-4d0f-9fa0-f0a9af62657c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a49107cc-abc4-4d5e-8d75-cfcda4b23c4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-503365 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-503365 --output=json --layout=cluster: exit status 7 (307.943844ms)

-- stdout --
	{"Name":"insufficient-storage-503365","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-503365","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1027 23:06:19.994876 1264293 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-503365" does not appear in /home/jenkins/minikube-integration/21790-1132878/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-503365 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-503365 --output=json --layout=cluster: exit status 7 (303.990728ms)

-- stdout --
	{"Name":"insufficient-storage-503365","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-503365","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1027 23:06:20.303488 1264358 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-503365" does not appear in /home/jenkins/minikube-integration/21790-1132878/kubeconfig
	E1027 23:06:20.313611 1264358 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/insufficient-storage-503365/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-503365" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-503365
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-503365: (1.955251211s)
--- PASS: TestInsufficientStorage (13.61s)
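Aside: with --output=json, each stdout line above is a CloudEvents-style JSON object (specversion, id, source, type, data). A minimal Go sketch for picking out the error events; the struct fields mirror the log, while the sample input and wiring are illustrative:

    // Sketch: decode the line-delimited CloudEvents emitted by --output=json.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "strings"
    )

    // Field names match the JSON lines in the output above; all data values
    // there are strings, so a map[string]string suffices.
    type minikubeEvent struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Source      string            `json:"source"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        // Two abbreviated sample lines (illustrative, trimmed from the output above).
        stream := `{"specversion":"1.0","id":"a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=21790"}}` + "\n" +
            `{"specversion":"1.0","id":"b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE"}}`
        sc := bufio.NewScanner(strings.NewReader(stream))
        for sc.Scan() {
            var ev minikubeEvent
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // tolerate non-JSON lines
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Println("error event:", ev.Data["name"], "exit", ev.Data["exitcode"])
            }
        }
    }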

TestRunningBinaryUpgrade (61.43s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2673575738 start -p running-upgrade-492039 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2673575738 start -p running-upgrade-492039 --memory=3072 --vm-driver=docker  --container-runtime=crio: (36.129371676s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-492039 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-492039 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.690611374s)
helpers_test.go:175: Cleaning up "running-upgrade-492039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-492039
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-492039: (2.288038435s)
--- PASS: TestRunningBinaryUpgrade (61.43s)

TestKubernetesUpgrade (356.8s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-767102 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-767102 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.710459422s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-767102
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-767102: (1.36121224s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-767102 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-767102 status --format={{.Host}}: exit status 7 (73.27059ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-767102 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1027 23:11:14.070255 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-767102 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m39.090705848s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-767102 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-767102 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-767102 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (126.090736ms)

-- stdout --
	* [kubernetes-upgrade-767102] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-767102
	    minikube start -p kubernetes-upgrade-767102 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7671022 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-767102 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-767102 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-767102 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.864209316s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-767102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-767102
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-767102: (2.440345084s)
--- PASS: TestKubernetesUpgrade (356.80s)
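Aside: the downgrade attempt above is rejected with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), so callers can branch on the process exit code rather than parsing stderr. A minimal Go sketch, with the command line taken from the log and error handling simplified:

    // Sketch: branch on minikube's exit code; 106 is the refused downgrade above.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "start",
            "-p", "kubernetes-upgrade-767102",
            "--memory=3072", "--kubernetes-version=v1.28.0",
            "--driver=docker", "--container-runtime=crio")
        err := cmd.Run()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
            fmt.Println("downgrade correctly refused (exit status 106)")
            return
        }
        fmt.Println("unexpected result:", err)
    }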

TestMissingContainerUpgrade (106.72s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1768234849 start -p missing-upgrade-893407 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1768234849 start -p missing-upgrade-893407 --memory=3072 --driver=docker  --container-runtime=crio: (56.864831193s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-893407
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-893407
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-893407 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1027 23:14:01.171268 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:14:17.138127 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:14:18.104069 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-893407 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.084951141s)
helpers_test.go:175: Cleaning up "missing-upgrade-893407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-893407
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-893407: (2.074056646s)
--- PASS: TestMissingContainerUpgrade (106.72s)

TestPause/serial/Start (93.27s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-180608 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-180608 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m33.268946021s)
--- PASS: TestPause/serial/Start (93.27s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-759801 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-759801 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (108.314377ms)

-- stdout --
	* [NoKubernetes-759801] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
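The MK_USAGE error above is the assertion of this test: minikube must reject --kubernetes-version combined with --no-kubernetes. A minimal sketch of the remediation the message suggests, reusing the binary and profile from this run:

	$ out/minikube-linux-arm64 config unset kubernetes-version
	$ out/minikube-linux-arm64 start -p NoKubernetes-759801 --no-kubernetes --driver=docker --container-runtime=crio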
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/StartWithK8s (42.39s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-759801 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-759801 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.957129976s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-759801 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.39s)

TestNoKubernetes/serial/StartWithStopK8s (20.76s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-759801 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-759801 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (18.354740198s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-759801 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-759801 status -o json: exit status 2 (353.571152ms)

-- stdout --
	{"Name":"NoKubernetes-759801","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
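The non-zero exit is expected here: with Kubernetes stopped, minikube status still prints the JSON above but signals the stopped components through its exit code. A hypothetical field-level check (assuming jq is available on the host):

	$ out/minikube-linux-arm64 -p NoKubernetes-759801 status -o json | jq -r '.Host, .Kubelet'
	Running
	Stopped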
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-759801
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-759801: (2.051583487s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.76s)

TestNoKubernetes/serial/Start (8.82s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-759801 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-759801 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.818011384s)
--- PASS: TestNoKubernetes/serial/Start (8.82s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-759801 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-759801 "sudo systemctl is-active --quiet service kubelet": exit status 1 (286.517501ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
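The "Process exited with status 3" line is what the test expects: systemctl is-active exits non-zero for a unit that is not active (3 is the conventional code for an inactive service), confirming no kubelet runs in the --no-kubernetes node. A hypothetical manual probe without --quiet shows the state directly:

	$ out/minikube-linux-arm64 ssh -p NoKubernetes-759801 'sudo systemctl is-active kubelet; echo exit=$?'
	inactive
	exit=3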
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

TestNoKubernetes/serial/ProfileList (1.17s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.17s)

TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-759801
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-759801: (1.30289519s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

TestNoKubernetes/serial/StartNoArgs (6.97s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-759801 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-759801 --driver=docker  --container-runtime=crio: (6.967658851s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.97s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-759801 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-759801 "sudo systemctl is-active --quiet service kubelet": exit status 1 (287.460291ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestNetworkPlugins/group/false (3.96s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-440075 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-440075 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (204.835142ms)

-- stdout --
	* [false-440075] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration

-- /stdout --
** stderr ** 
	I1027 23:07:49.737731 1274138 out.go:360] Setting OutFile to fd 1 ...
	I1027 23:07:49.738025 1274138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:07:49.738057 1274138 out.go:374] Setting ErrFile to fd 2...
	I1027 23:07:49.738077 1274138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 23:07:49.738363 1274138 root.go:340] Updating PATH: /home/jenkins/minikube-integration/21790-1132878/.minikube/bin
	I1027 23:07:49.738894 1274138 out.go:368] Setting JSON to false
	I1027 23:07:49.739959 1274138 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":21019,"bootTime":1761585451,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1027 23:07:49.740066 1274138 start.go:143] virtualization:  
	I1027 23:07:49.743528 1274138 out.go:179] * [false-440075] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 23:07:49.747394 1274138 out.go:179]   - MINIKUBE_LOCATION=21790
	I1027 23:07:49.747478 1274138 notify.go:221] Checking for updates...
	I1027 23:07:49.753433 1274138 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 23:07:49.756459 1274138 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21790-1132878/kubeconfig
	I1027 23:07:49.759453 1274138 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21790-1132878/.minikube
	I1027 23:07:49.762316 1274138 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 23:07:49.765096 1274138 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 23:07:49.768605 1274138 config.go:182] Loaded profile config "pause-180608": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 23:07:49.768698 1274138 driver.go:422] Setting default libvirt URI to qemu:///system
	I1027 23:07:49.798354 1274138 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 23:07:49.798520 1274138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 23:07:49.875299 1274138 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 23:07:49.865864242 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 23:07:49.875407 1274138 docker.go:318] overlay module found
	I1027 23:07:49.878444 1274138 out.go:179] * Using the docker driver based on user configuration
	I1027 23:07:49.881217 1274138 start.go:307] selected driver: docker
	I1027 23:07:49.881239 1274138 start.go:928] validating driver "docker" against <nil>
	I1027 23:07:49.881262 1274138 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 23:07:49.884894 1274138 out.go:203] 
	W1027 23:07:49.887810 1274138 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1027 23:07:49.890614 1274138 out.go:203] 

** /stderr **
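The MK_USAGE exit is the behavior under test: minikube rejects --cni=false with the crio runtime because cri-o ships no built-in pod network. A sketch of a valid crio invocation, supplying a CNI explicitly as the kindnet run later in this report does:

	$ out/minikube-linux-arm64 start -p false-440075 --memory=3072 --cni=kindnet --driver=docker --container-runtime=crio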
net_test.go:88: 
----------------------- debugLogs start: false-440075 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-440075

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-440075

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-440075

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-440075

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-440075

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-440075

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-440075

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-440075

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-440075

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-440075

>>> host: /etc/nsswitch.conf:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: /etc/hosts:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: /etc/resolv.conf:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-440075

>>> host: crictl pods:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: crictl containers:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> k8s: describe netcat deployment:
error: context "false-440075" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-440075" does not exist

>>> k8s: netcat logs:
error: context "false-440075" does not exist

>>> k8s: describe coredns deployment:
error: context "false-440075" does not exist

>>> k8s: describe coredns pods:
error: context "false-440075" does not exist

>>> k8s: coredns logs:
error: context "false-440075" does not exist

>>> k8s: describe api server pod(s):
error: context "false-440075" does not exist

>>> k8s: api server logs:
error: context "false-440075" does not exist

>>> host: /etc/cni:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: ip a s:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: ip r s:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: iptables-save:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: iptables table nat:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> k8s: describe kube-proxy daemon set:
error: context "false-440075" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-440075" does not exist

>>> k8s: kube-proxy logs:
error: context "false-440075" does not exist

>>> host: kubelet daemon status:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: kubelet daemon config:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> k8s: kubelet logs:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 23:07:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-180608
contexts:
- context:
    cluster: pause-180608
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 23:07:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-180608
  name: pause-180608
current-context: pause-180608
kind: Config
preferences: {}
users:
- name: pause-180608
  user:
    client-certificate: /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.crt
    client-key: /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-440075

>>> host: docker daemon status:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: docker daemon config:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: /etc/docker/daemon.json:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: docker system info:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: cri-docker daemon status:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: cri-docker daemon config:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: cri-dockerd version:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: containerd daemon status:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: containerd daemon config:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: /etc/containerd/config.toml:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: containerd config dump:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: crio daemon status:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: crio daemon config:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: /etc/crio:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

>>> host: crio config:
* Profile "false-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-440075"

----------------------- debugLogs end: false-440075 [took: 3.578261916s] --------------------------------
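All of the kubectl probes above fail because the false-440075 cluster was (by design) never created; the only context in the kubeconfig shown is pause-180608. Against a live profile the same probes would pin the context explicitly, for example:

	$ kubectl --context pause-180608 get nodes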
helpers_test.go:175: Cleaning up "false-440075" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-440075
--- PASS: TestNetworkPlugins/group/false (3.96s)

TestPause/serial/SecondStartNoReconfiguration (33.05s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-180608 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-180608 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.009185187s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (33.05s)

TestStoppedBinaryUpgrade/Setup (1.32s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.32s)

TestStoppedBinaryUpgrade/Upgrade (61.89s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3168378064 start -p stopped-upgrade-383294 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3168378064 start -p stopped-upgrade-383294 --memory=3072 --vm-driver=docker  --container-runtime=crio: (36.371791586s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3168378064 -p stopped-upgrade-383294 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3168378064 -p stopped-upgrade-383294 stop: (1.36973049s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-383294 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-383294 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.151675704s)
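Condensed, the scenario is: create a cluster with the old released binary, stop it, then start the same profile with the binary under test, which must upgrade the stopped cluster in place. The flow, as run above:

	$ /tmp/minikube-v1.32.0.3168378064 start -p stopped-upgrade-383294 --memory=3072 --vm-driver=docker --container-runtime=crio
	$ /tmp/minikube-v1.32.0.3168378064 -p stopped-upgrade-383294 stop
	$ out/minikube-linux-arm64 start -p stopped-upgrade-383294 --memory=3072 --driver=docker --container-runtime=crio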
--- PASS: TestStoppedBinaryUpgrade/Upgrade (61.89s)

TestNetworkPlugins/group/auto/Start (89.01s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-440075 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1027 23:16:14.068688 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-440075 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m29.011896643s)
--- PASS: TestNetworkPlugins/group/auto/Start (89.01s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.79s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-383294
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-383294: (1.788892951s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.79s)

TestNetworkPlugins/group/kindnet/Start (79.24s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-440075 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-440075 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m19.23493928s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.24s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-440075 "pgrep -a kubelet"
I1027 23:17:31.737785 1134735 config.go:182] Loaded profile config "auto-440075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (9.38s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-440075 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vv9js" [f84836d5-9a1b-41cb-9477-acc46d0eca59] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vv9js" [f84836d5-9a1b-41cb-9477-acc46d0eca59] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003645935s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.38s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-440075 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-440075 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-440075 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
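Unlike the Localhost check, the hairpin check has the pod dial its own service name (netcat) rather than localhost, so the packet must leave the pod, reach the ClusterIP, and be NATed back to the originating pod — the classic hairpin path. The equivalent manual probe is the same command the test runs:

	$ kubectl --context auto-440075 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"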
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (68.05s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-440075 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-440075 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m8.049887667s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.05s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-9pps7" [16c1c81c-64b2-4c67-8863-155cdb9a81e7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003748227s
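The ControllerPod step is a poll: wait up to 10m for a Running pod carrying the app=kindnet label in kube-system. A hypothetical manual equivalent:

	$ kubectl --context kindnet-440075 -n kube-system get pods -l app=kindnet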
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-440075 "pgrep -a kubelet"
I1027 23:18:15.600515 1134735 config.go:182] Loaded profile config "kindnet-440075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-440075 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-grjf9" [d17e4816-3326-4bf2-bbda-9997d22370f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-grjf9" [d17e4816-3326-4bf2-bbda-9997d22370f8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003496732s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-440075 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-440075 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.25s)

TestNetworkPlugins/group/kindnet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-440075 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/Start (65.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-440075 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-440075 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m5.257169409s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.26s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-6x5v5" [d8da4a2d-10c2-4d35-9420-d4ef868912c6] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-6x5v5" [d8da4a2d-10c2-4d35-9420-d4ef868912c6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005448212s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-440075 "pgrep -a kubelet"
I1027 23:19:17.144755 1134735 config.go:182] Loaded profile config "calico-440075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

TestNetworkPlugins/group/calico/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-440075 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-62zvk" [67ecb411-bc9e-40f2-9ea7-2185f7bf1139] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1027 23:19:18.104088 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-62zvk" [67ecb411-bc9e-40f2-9ea7-2185f7bf1139] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003830872s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.38s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-440075 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-440075 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-440075 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/Start (75.72s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-440075 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-440075 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m15.721115641s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.72s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.8s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-440075 "pgrep -a kubelet"
I1027 23:20:00.822719 1134735 config.go:182] Loaded profile config "custom-flannel-440075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.80s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.65s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-440075 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d2gwv" [5fb54b6d-08f4-403e-a1f2-81ecabe091b3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-d2gwv" [5fb54b6d-08f4-403e-a1f2-81ecabe091b3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.00501993s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.65s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-440075 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-440075 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-440075 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (56.8s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-440075 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-440075 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (56.802289548s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.80s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-440075 "pgrep -a kubelet"
I1027 23:21:09.973317 1134735 config.go:182] Loaded profile config "enable-default-cni-440075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.4s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-440075 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m544j" [1a78e67c-987c-4e06-ad50-d40e33838e37] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1027 23:21:14.068636 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-m544j" [1a78e67c-987c-4e06-ad50-d40e33838e37] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003906625s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.40s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-440075 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-440075 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-440075 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-d9vhk" [81f2b462-6255-488f-8574-7bd9e45bacf7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004323635s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
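Note: this step waits up to 10m for the flannel DaemonSet pod (label app=flannel in namespace kube-flannel) to report Running. A roughly equivalent manual check, sketched here rather than taken from the test's own code:

	kubectl --context flannel-440075 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=600s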

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-440075 "pgrep -a kubelet"
I1027 23:21:41.248002 1134735 config.go:182] Loaded profile config "flannel-440075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/flannel/NetCatPod (11.24s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-440075 replace --force -f testdata/netcat-deployment.yaml
I1027 23:21:41.483348 1134735 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gsdbf" [31e7256f-9bef-4e63-aae4-7f56961a4a3c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gsdbf" [31e7256f-9bef-4e63-aae4-7f56961a4a3c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003825999s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

TestNetworkPlugins/group/bridge/Start (81.74s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-440075 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-440075 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m21.735447882s)
--- PASS: TestNetworkPlugins/group/bridge/Start (81.74s)

TestNetworkPlugins/group/flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-440075 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-440075 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-440075 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestStartStop/group/old-k8s-version/serial/FirstStart (65.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-477179 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1027 23:22:32.080468 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:22:32.086840 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:22:32.098277 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:22:32.119682 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:22:32.161069 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:22:32.242516 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:22:32.404389 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:22:32.726051 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:22:33.368146 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:22:34.649464 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:22:37.211067 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:22:42.332784 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:22:52.574705 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-477179 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m5.079485279s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (65.09s)
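Note: the repeated cert_rotation.go "Loading client cert failed" lines above (and elsewhere in this report) appear to be benign cross-test noise: the shared test client's TLS cache still references client.crt files for profiles such as auto-440075 and kindnet-440075 that earlier tests have already deleted, so every reload logs a file-not-found error. Confirming the file is simply gone (path taken verbatim from the log):

	ls -l /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt
	# expected: No such file or directory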

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-440075 "pgrep -a kubelet"
I1027 23:23:04.825057 1134735 config.go:182] Loaded profile config "bridge-440075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)

TestNetworkPlugins/group/bridge/NetCatPod (13.44s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-440075 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xgl5n" [2f3b672f-6400-48b5-a646-ff71cde819e1] Pending
E1027 23:23:09.200320 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/kindnet-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:23:09.206701 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/kindnet-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:23:09.218084 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/kindnet-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:23:09.239438 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/kindnet-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:23:09.281390 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/kindnet-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:23:09.362763 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/kindnet-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:23:09.524120 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/kindnet-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:23:09.845924 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/kindnet-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:23:10.487729 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/kindnet-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-xgl5n" [2f3b672f-6400-48b5-a646-ff71cde819e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1027 23:23:11.769477 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/kindnet-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-xgl5n" [2f3b672f-6400-48b5-a646-ff71cde819e1] Running
E1027 23:23:13.056413 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:23:14.331253 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/kindnet-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.003587388s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.44s)

TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-440075 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-440075 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-440075 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.46s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-477179 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d61db7c2-37e3-45dd-a444-eb086de138ff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d61db7c2-37e3-45dd-a444-eb086de138ff] Running
E1027 23:23:29.694900 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/kindnet-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00381s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-477179 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.46s)
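Note: once the busybox pod is Ready, the test execs "ulimit -n" in it (visible above) to record the open-file limit the crio runtime hands to containers. The equivalent manual probe:

	kubectl --context old-k8s-version-477179 exec busybox -- /bin/sh -c "ulimit -n"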

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-477179 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-477179 --alsologtostderr -v=3: (12.225158016s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.23s)

TestStartStop/group/no-preload/serial/FirstStart (74.35s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m14.351130506s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.35s)
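Note: --preload=false skips the preloaded image tarball, so the Kubernetes images are pulled through the container runtime during start instead of being extracted locally. To inspect what crio actually pulled, using the same ssh form the tests use (sketch):

	out/minikube-linux-arm64 ssh -p no-preload-947754 "sudo crictl images"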

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-477179 -n old-k8s-version-477179
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-477179 -n old-k8s-version-477179: exit status 7 (108.839391ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-477179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
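Note: "exit status 7 (may be ok)" is expected here. Per minikube's status --help, the exit code encodes component state as a bitmask (1 for minikube not OK, 2 for the cluster not OK, 4 for Kubernetes not OK), so 7 is the normal result for a fully stopped profile:

	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-477179; echo "exit=$?"
	# Stopped
	# exit=7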

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (52.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-477179 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1027 23:23:50.176737 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/kindnet-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:23:54.018566 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:24:10.710578 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/calico-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:24:10.716842 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/calico-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:24:10.728151 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/calico-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:24:10.749440 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/calico-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:24:10.790758 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/calico-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:24:10.872096 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/calico-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:24:11.034015 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/calico-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:24:11.356136 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/calico-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:24:11.997968 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/calico-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:24:13.279676 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/calico-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:24:15.841620 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/calico-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:24:18.104829 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:24:20.963620 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/calico-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:24:31.139027 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/kindnet-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:24:31.205390 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/calico-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-477179 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.801632884s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-477179 -n old-k8s-version-477179
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.24s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-hnmb4" [9af278b5-b4c3-4acf-a098-ffd7b10c75e5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003274438s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-hnmb4" [9af278b5-b4c3-4acf-a098-ffd7b10c75e5] Running
E1027 23:24:51.686815 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/calico-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003651508s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-477179 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-477179 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
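Note: the image check lists every image in the runtime and flags tags outside the expected Kubernetes set (kindnetd, the busybox test image above). The same data can be pulled by hand; the jq filter assumes the JSON entries expose a repoTags array, which this report does not itself show:

	out/minikube-linux-arm64 -p old-k8s-version-477179 image list --format=json | jq -r '.[].repoTags[]'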

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.43s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-947754 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [436727ba-f898-49e4-ae12-49daa555d6ba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [436727ba-f898-49e4-ae12-49daa555d6ba] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00490122s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-947754 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.43s)

TestStartStop/group/embed-certs/serial/FirstStart (89.14s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m29.137902597s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.14s)

TestStartStop/group/no-preload/serial/Stop (12.3s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-947754 --alsologtostderr -v=3
E1027 23:25:06.546555 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/custom-flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:25:11.668624 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/custom-flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:25:15.940956 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-947754 --alsologtostderr -v=3: (12.304544111s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.30s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-947754 -n no-preload-947754
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-947754 -n no-preload-947754: exit status 7 (99.014722ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-947754 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/no-preload/serial/SecondStart (55.19s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1027 23:25:21.910660 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/custom-flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:25:32.649015 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/calico-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:25:42.392875 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/custom-flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:25:53.060893 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/kindnet-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:10.339198 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/enable-default-cni-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:10.345669 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/enable-default-cni-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:10.357126 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/enable-default-cni-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:10.378528 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/enable-default-cni-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:10.419932 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/enable-default-cni-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:10.501358 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/enable-default-cni-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:10.662895 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/enable-default-cni-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:10.984446 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/enable-default-cni-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:11.626482 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/enable-default-cni-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:12.908058 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/enable-default-cni-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-947754 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.809979082s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-947754 -n no-preload-947754
E1027 23:26:14.068439 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/functional-812436/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (55.19s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zxvvw" [4bbaec9e-8f8f-4fa3-a0c2-09c0878f6f31] Running
E1027 23:26:15.469720 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/enable-default-cni-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003735714s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zxvvw" [4bbaec9e-8f8f-4fa3-a0c2-09c0878f6f31] Running
E1027 23:26:20.592253 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/enable-default-cni-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:23.354530 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/custom-flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003289058s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-947754 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-947754 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/DeployApp (9.45s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-790322 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [99fa1637-d815-4323-b100-31f27733f2dc] Pending
helpers_test.go:352: "busybox" [99fa1637-d815-4323-b100-31f27733f2dc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1027 23:26:34.925281 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:34.931684 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:34.943057 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:34.964523 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:35.005894 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:35.087139 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:35.248822 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [99fa1637-d815-4323-b100-31f27733f2dc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003380078s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-790322 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.45s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.65s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-336451 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1027 23:26:36.212260 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:37.493509 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:40.055292 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-336451 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m24.645601453s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.65s)
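Note: --apiserver-port=8444 moves the API server off minikube's default 8443. A quick sanity check is that the kubeconfig endpoint reflects the non-default port (sketch; the node IP placeholder is not taken from this report):

	kubectl --context default-k8s-diff-port-336451 cluster-info
	# Kubernetes control plane is running at https://<node-ip>:8444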

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.33s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-790322 --alsologtostderr -v=3
E1027 23:26:45.179668 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:51.315651 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/enable-default-cni-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:54.570350 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/calico-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:26:55.430953 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-790322 --alsologtostderr -v=3: (12.328586819s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.33s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790322 -n embed-certs-790322
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790322 -n embed-certs-790322: exit status 7 (92.196348ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-790322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/embed-certs/serial/SecondStart (61.63s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1027 23:27:15.912952 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:27:32.080490 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:27:32.277365 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/enable-default-cni-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:27:45.277090 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/custom-flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:27:56.875088 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-790322 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m1.240327109s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790322 -n embed-certs-790322
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (61.63s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m4ssq" [00ed63f7-8d59-4ed6-84ce-e3dc2e39663d] Running
E1027 23:27:59.782610 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/auto-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003587555s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)
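The "waiting 9m0s for pods matching ..." lines come from a poll loop over the cluster's pods. A rough client-go equivalent is sketched below; the kubeconfig path is hypothetical, only the Running phase is checked (the real helper also tracks readiness, as the "healthy within" line suggests), and this is not minikube's own helper:

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // clientcmd.RecommendedHomeFile is ~/.kube/config; the harness resolves
    // the kubeconfig per profile instead.
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Poll every 2s, give up after 9m0s, the timeout quoted in the log.
    err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 9*time.Minute, true,
        func(ctx context.Context) (bool, error) {
            pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
                LabelSelector: "k8s-app=kubernetes-dashboard",
            })
            if err != nil {
                return false, nil // treat API errors as transient; keep polling
            }
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    return true, nil
                }
            }
            return false, nil
        })
    fmt.Println("dashboard healthy:", err == nil)
}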

x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-336451 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4e6e40f3-3676-46f6-b448-f5622cc908a9] Pending
helpers_test.go:352: "busybox" [4e6e40f3-3676-46f6-b448-f5622cc908a9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4e6e40f3-3676-46f6-b448-f5622cc908a9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003504689s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-336451 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.42s)
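The final step runs "ulimit -n" inside the deployed pod, presumably to confirm the container runtime applies the expected open-file limit. A stdlib sketch of that probe, with the context and pod name taken from the log:

package main

import (
    "fmt"
    "log"
    "os/exec"
)

func main() {
    // Same invocation as the test's last "(dbg) Run" line above.
    out, err := exec.Command("kubectl",
        "--context", "default-k8s-diff-port-336451",
        "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
    if err != nil {
        log.Fatalf("kubectl exec failed: %v\n%s", err, out)
    }
    fmt.Printf("ulimit -n in pod: %s", out)
}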

x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m4ssq" [00ed63f7-8d59-4ed6-84ce-e3dc2e39663d] Running
E1027 23:28:05.228649 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/bridge-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:05.235034 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/bridge-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:05.246369 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/bridge-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:05.267918 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/bridge-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:05.309416 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/bridge-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:05.391092 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/bridge-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:05.552565 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/bridge-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:05.873839 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/bridge-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:06.515607 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/bridge-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:07.797878 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/bridge-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:09.200150 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/kindnet-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003776423s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-790322 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-790322 image list --format=json
E1027 23:28:10.359759 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/bridge-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
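VerifyKubernetesImages parses `image list --format=json` and flags images outside the expected set, as the "Found non-minikube image" lines show. A sketch of that check; the listedImage struct is a trimmed guess at the JSON schema, and the registry.k8s.io prefix test is a crude stand-in for the real expected-image comparison:

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
    "strings"
)

// listedImage is an assumed shape for one entry of the JSON array;
// only repoTags is used here.
type listedImage struct {
    RepoTags []string `json:"repoTags"`
}

func main() {
    out, err := exec.Command("out/minikube-linux-arm64",
        "-p", "embed-certs-790322", "image", "list", "--format=json").Output()
    if err != nil {
        panic(err)
    }
    var images []listedImage
    if err := json.Unmarshal(out, &images); err != nil {
        panic(err)
    }
    for _, img := range images {
        for _, tag := range img.RepoTags {
            if !strings.HasPrefix(tag, "registry.k8s.io/") {
                fmt.Println("Found non-minikube image:", tag)
            }
        }
    }
}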

x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-336451 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-336451 --alsologtostderr -v=3: (12.332185748s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.33s)

x
+
TestStartStop/group/newest-cni/serial/FirstStart (44.3s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-852936 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1027 23:28:24.238395 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:24.244990 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:24.256411 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:24.277866 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:24.319297 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:24.400822 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:24.562569 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:24.884302 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:25.525737 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:25.724360 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/bridge-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-852936 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (44.303970965s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.30s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-336451 -n default-k8s-diff-port-336451
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-336451 -n default-k8s-diff-port-336451: exit status 7 (110.699358ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-336451 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)
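EnableAddonAfterStop relies on minikube's status exit codes: on a stopped cluster, `minikube status` prints "Stopped" and exits 7, which the test explicitly tolerates ("may be ok") before enabling the dashboard addon. A stdlib sketch of that exit-code handling, using the command from the log:

package main

import (
    "errors"
    "fmt"
    "log"
    "os/exec"
)

func main() {
    cmd := exec.Command("out/minikube-linux-arm64", "status",
        "--format={{.Host}}",
        "-p", "default-k8s-diff-port-336451",
        "-n", "default-k8s-diff-port-336451")
    out, err := cmd.Output() // on an ExitError, out still holds the captured stdout
    var exitErr *exec.ExitError
    switch {
    case err == nil:
        fmt.Printf("host: %s\n", out)
    case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
        fmt.Printf("host: %s (exit status 7, may be ok)\n", out)
    default:
        log.Fatal(err)
    }
}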

x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (57.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-336451 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1027 23:28:26.807638 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:29.369773 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:34.491317 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:36.902989 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/kindnet-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:44.733079 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/old-k8s-version-477179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:46.206505 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/bridge-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:28:54.198759 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/enable-default-cni-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-336451 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (56.803085808s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-336451 -n default-k8s-diff-port-336451
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (57.28s)

x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

x
+
TestStartStop/group/newest-cni/serial/Stop (2.2s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-852936 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-852936 --alsologtostderr -v=3: (2.20333365s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.20s)

x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-852936 -n newest-cni-852936
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-852936 -n newest-cni-852936: exit status 7 (70.990345ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-852936 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

x
+
TestStartStop/group/newest-cni/serial/SecondStart (15.22s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-852936 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1027 23:29:10.710629 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/calico-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:29:18.104763 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/addons-789752/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 23:29:18.796860 1134735 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/flannel-440075/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-852936 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (14.784376844s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-852936 -n newest-cni-852936
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.22s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9qnl7" [b7431c94-0d43-4b74-900a-1d361016710a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004424954s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-852936 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9qnl7" [b7431c94-0d43-4b74-900a-1d361016710a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003574132s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-336451 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-336451 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

Test skip (31/327)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

x
+
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

x
+
TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

x
+
TestDownloadOnlyKic (0.44s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-332028 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-332028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-332028
--- SKIP: TestDownloadOnlyKic (0.44s)

x
+
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

x
+
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

x
+
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

x
+
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

x
+
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

x
+
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

x
+
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

x
+
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

x
+
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

x
+
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

x
+
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

x
+
TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:34: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

x
+
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

x
+
TestNetworkPlugins/group/kubenet (3.57s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-440075 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-440075

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-440075

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-440075

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-440075

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-440075

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-440075

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-440075

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-440075

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-440075

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-440075

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: /etc/hosts:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: /etc/resolv.conf:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-440075

>>> host: crictl pods:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: crictl containers:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> k8s: describe netcat deployment:
error: context "kubenet-440075" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-440075" does not exist

>>> k8s: netcat logs:
error: context "kubenet-440075" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-440075" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-440075" does not exist

>>> k8s: coredns logs:
error: context "kubenet-440075" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-440075" does not exist

>>> k8s: api server logs:
error: context "kubenet-440075" does not exist

>>> host: /etc/cni:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: ip a s:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: ip r s:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: iptables-save:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: iptables table nat:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-440075" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-440075" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-440075" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: kubelet daemon config:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> k8s: kubelet logs:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 23:07:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-180608
contexts:
- context:
    cluster: pause-180608
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 23:07:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-180608
  name: pause-180608
current-context: pause-180608
kind: Config
preferences: {}
users:
- name: pause-180608
  user:
    client-certificate: /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.crt
    client-key: /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-440075

>>> host: docker daemon status:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: docker daemon config:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: docker system info:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: cri-docker daemon status:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: cri-docker daemon config:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: cri-dockerd version:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: containerd daemon status:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: containerd daemon config:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: containerd config dump:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: crio daemon status:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: crio daemon config:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: /etc/crio:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"

>>> host: crio config:
* Profile "kubenet-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-440075"
----------------------- debugLogs end: kubenet-440075 [took: 3.413745773s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-440075" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-440075
--- SKIP: TestNetworkPlugins/group/kubenet (3.57s)
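
Note: every host-side probe above fails with the same message because debugLogs collects host state for a kubenet profile that was never created (the variant is skipped on this driver/runtime combination). A minimal sketch of a guard that would silence this noise, assuming "minikube profile list --output json" keeps its current {"valid": [...], "invalid": [...]} shape; the helper name is hypothetical, not part of the suite:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors the assumed shape of "minikube profile list --output json".
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

// profileExists reports whether minikube knows about the named profile.
func profileExists(name string) (bool, error) {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		return false, fmt.Errorf("profile list: %w", err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		return false, err
	}
	for _, p := range pl.Valid {
		if p.Name == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Collect host logs only when the profile actually exists.
	ok, err := profileExists("kubenet-440075")
	fmt.Println(ok, err) // false <nil> for the run above: the profile was never created
}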

TestNetworkPlugins/group/cilium (4.68s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-440075 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-440075

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-440075

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-440075

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-440075

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-440075

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-440075

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-440075

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-440075

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-440075

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-440075

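Note: on a live cluster the dig/nc probes above exercise the cluster DNS service at 10.96.0.10 over both UDP and TCP; here they never reach DNS at all, because kubectl rejects the nonexistent cilium-440075 context first. A minimal sketch of the same resolver check in Go, assuming only the 10.96.0.10 ClusterIP taken from the log; everything else is illustrative:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Pin every lookup to the cluster DNS service, like dig @10.96.0.10.
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	fmt.Println(addrs, err) // fails outside the cluster; resolves the service VIP inside
}
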
>>> host: /etc/nsswitch.conf:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: /etc/hosts:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: /etc/resolv.conf:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-440075

>>> host: crictl pods:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: crictl containers:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> k8s: describe netcat deployment:
error: context "cilium-440075" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-440075" does not exist

>>> k8s: netcat logs:
error: context "cilium-440075" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-440075" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-440075" does not exist

>>> k8s: coredns logs:
error: context "cilium-440075" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-440075" does not exist

>>> k8s: api server logs:
error: context "cilium-440075" does not exist

>>> host: /etc/cni:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: ip a s:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: ip r s:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: iptables-save:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: iptables table nat:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-440075

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-440075

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-440075" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-440075" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-440075

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-440075

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-440075" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-440075" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-440075" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-440075" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-440075" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: kubelet daemon config:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> k8s: kubelet logs:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21790-1132878/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 23:07:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-180608
contexts:
- context:
    cluster: pause-180608
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 23:07:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-180608
  name: pause-180608
current-context: pause-180608
kind: Config
preferences: {}
users:
- name: pause-180608
  user:
    client-certificate: /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.crt
    client-key: /home/jenkins/minikube-integration/21790-1132878/.minikube/profiles/pause-180608/client.key

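Note: the kubeconfig dump is the one telling artifact in this section: it still points at the leftover pause-180608 cluster and contains no cilium-440075 entry, which is exactly why every kubectl probe above reports the context as missing. A minimal sketch that reproduces the check with client-go, loading the kubeconfig from the same default path chain kubectl uses:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig kubectl would use (KUBECONFIG, else ~/.kube/config).
	cfg, err := clientcmd.NewDefaultPathOptions().GetStartingConfig()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	const name = "cilium-440075"
	if _, ok := cfg.Contexts[name]; !ok {
		// Matches the probes above: the profile was never started, so no
		// context for it was ever written.
		fmt.Printf("context %q does not exist; current-context is %q\n", name, cfg.CurrentContext)
	}
}
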
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-440075

>>> host: docker daemon status:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: docker daemon config:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: docker system info:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: cri-docker daemon status:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: cri-docker daemon config:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: cri-dockerd version:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: containerd daemon status:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: containerd daemon config:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: containerd config dump:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: crio daemon status:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: crio daemon config:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: /etc/crio:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

>>> host: crio config:
* Profile "cilium-440075" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-440075"

----------------------- debugLogs end: cilium-440075 [took: 4.514560866s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-440075" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-440075
--- SKIP: TestNetworkPlugins/group/cilium (4.68s)
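
Note: net_test.go:102 skips the cilium variant before any cluster is started, yet the deferred diagnostic collector still runs and produces all of the failed probes above. A minimal sketch, not the actual minikube test code, of gating the collector on the skip; collectDebugLogs is a hypothetical stand-in:

package net_test

import "testing"

func TestNetworkPluginsCilium(t *testing.T) {
	t.Cleanup(func() {
		if t.Skipped() {
			return // nothing was started, so there is nothing worth debug-logging
		}
		// collectDebugLogs(t, "cilium-440075") // hypothetical collector
	})
	t.Skip("Skipping the test as it's interfering with other tests and is outdated")
}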

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-247293" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-247293
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
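
Note: this skip is driver-gated rather than runtime-gated. A minimal sketch of the same guard, assuming the driver name arrives via a MINIKUBE_DRIVER environment variable (an assumption; the real suite resolves the driver from its own test flags):

package startstop_test

import (
	"os"
	"testing"
)

func TestDisableDriverMounts(t *testing.T) {
	// MINIKUBE_DRIVER is an assumed convention for this sketch only.
	if driver := os.Getenv("MINIKUBE_DRIVER"); driver != "virtualbox" {
		t.Skipf("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox (got driver %q)", driver)
	}
	// ... start a cluster with --disable-driver-mounts and assert the host mounts are absent
}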
